title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags |
---|---|---|---|---|---|---|---|---|---|
finding a special path string in HTML text in python | 40,107,690 | <p>I'm trying to extract a path in an HTML file that I read.
In this case the path that I'm looking for is a logo from google's main site.</p>
<p>I'm pretty sure that the regular expression I defined is right, but I guess I'm missing something.</p>
<p>The code is:</p>
<pre><code>import re
import urllib
a=urllib.urlopen ('https://www.google.co.il/')
Text = a.read(250)
print Text
print '\n\n'
b= re.search (r'\"\/[a-z0-9 ]*',Text)
print format(b.group(0))
</code></pre>
<p>The actual text that I want to get is:</p>
<p><strong>/images/branding/googleg/1x/googleg_standard_color_128dp.png</strong></p>
<p>I'd really appreciate it if someone could point me in the right direction</p>
| 2 | 2016-10-18T11:59:35Z | 40,108,138 | <p>Here's my answer:</p>
<pre><code>import re
import urllib
a=urllib.urlopen ('https://www.google.co.il/')
text = a.read(250)
print text
print '\n\n'
b= re.search (r'\"(\/[a-z0-9_. ]+)+\"',text)
print format(b.group(0))
</code></pre>
<p>Run gives:</p>
<pre><code>>>> python stackoverflow.py
<!doctype html><html dir="rtl" itemscope="" itemtype="http://schema.org/WebPage" lang="iw"><head><meta content="text/html; charset=UTF-8" http-equiv="Content-Type"><meta content="/images/branding/googleg/1x/googleg_standard_color_128dp.png" itemprop=
"/images/branding/googleg/1x/googleg_standard_color_128dp.png"
</code></pre>
<p>Explanation of the regex <code>\"(\/[a-z0-9_. ]+)+\"</code>: first, the original character class is missing <code>.</code> and <code>_</code>, which appear in the file name, so they need to be added inside the square brackets. <code>\/[a-z0-9_. ]+</code> matches a <code>/</code> followed by a string of length at least 1. <code>(\/[a-z0-9_. ]+)+</code> repeats that match, so paths with more than one folder are covered. Finally, the two <code>"</code> anchor the match at the surrounding quotes.</p>
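<p>As an aside, a minimal sketch of the same idea that captures only the path itself, without the surrounding quotes (assuming Python 2 and that the page still embeds the logo path within its first 250 bytes):</p>
<pre><code>import re
import urllib

html = urllib.urlopen('https://www.google.co.il/').read(250)
match = re.search(r'"(/[a-z0-9_./]+\.png)"', html, re.I)
if match:
    # prints /images/branding/googleg/1x/googleg_standard_color_128dp.png
    print match.group(1)
</code></pre>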
| 0 | 2016-10-18T12:21:55Z | [
"python",
"regex",
"expression"
] |
list comprehension does not return empty list | 40,107,819 | <p>I tried to find a relevant question but couldn't find one, so I'm creating a new one.
My program creates a new list using a list comprehension in Python, filtered by a simple if condition.</p>
<pre><code> Newone = [ temp for temp in Oldone if temp % 2 != 0 ]
</code></pre>
<p>It works fine, but in some situations it doesn't. For example, this one:</p>
<pre><code> Oldone = [1]
Newone = [ temp for temp in Oldone if temp % 2 != 0 ]
</code></pre>
<p>This returns [1], but I am expecting Newone to be [].</p>
| -6 | 2016-10-18T12:06:01Z | 40,107,855 | <pre><code> 1%2 == 1
</code></pre>
<p>So your condition <code>temp % 2 != 0</code> is <code>True</code>, therefore the element is included in the list. If you want an empty list here, you should change the condition to <code>temp % 2 == 0</code>.</p>
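<p>A quick sketch of the flipped condition, for illustration:</p>
<pre><code>Oldone = [1]
Newone = [temp for temp in Oldone if temp % 2 == 0]  # keep only even numbers
print Newone  # prints []
</code></pre>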
| 5 | 2016-10-18T12:07:59Z | [
"python",
"list",
"condition",
"list-comprehension"
] |
list comprehension does not return empty list | 40,107,819 | <p>I tried to find a relevant question but couldn't find one, so I'm creating a new one.
My program creates a new list using a list comprehension in Python, filtered by a simple if condition.</p>
<pre><code> Newone = [ temp for temp in Oldone if temp % 2 != 0 ]
</code></pre>
<p>It works fine, but in some situations it doesn't. For example, this one:</p>
<pre><code> Oldone = [1]
Newone = [ temp for temp in Oldone if temp % 2 != 0 ]
</code></pre>
<p>This returns [1], but I am expecting Newone to be [].</p>
 | -6 | 2016-10-18T12:06:01Z | 40,108,021 | <p>If you are not sure what's happening, your list comprehension:</p>
<pre><code> Newone = [ temp for temp in Oldone if temp % 2 != 0 ]
</code></pre>
<p>means: put into the new list <code>Newone</code> all <code>temp</code> values from the existing list <code>Oldone</code> that satisfy the condition <code>temp % 2 != 0</code> (essentially, keep only odd numbers, since the remainder is 1 whenever an odd number is divided by 2).</p>
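<p>For illustration, a small example with a mixed input list:</p>
<pre><code>Oldone = [1, 2, 3, 4]
Newone = [temp for temp in Oldone if temp % 2 != 0]
print Newone  # prints [1, 3] -- only the odd values pass the filter
</code></pre>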
| 3 | 2016-10-18T12:16:13Z | [
"python",
"list",
"condition",
"list-comprehension"
] |
Incomplete list pop in list comprehension | 40,108,025 | <p>I have been trying to <code>pop</code> elements in a list comprehension using the <code>takewhile</code> function, and I ran into behaviour that is hard for me to understand. My terminal session looks like this:</p>
<p><a href="https://i.stack.imgur.com/yMiBB.png" rel="nofollow"><img src="https://i.stack.imgur.com/yMiBB.png" alt="enter image description here"></a></p>
<p>However, when I tried the same thing with strings, the problem didn't occur:</p>
<p><a href="https://i.stack.imgur.com/yiA53.png" rel="nofollow"><img src="https://i.stack.imgur.com/yiA53.png" alt="enter image description here"></a></p>
<p>Can someone explain to me what happened in the first scenario? Why has <code>g.pop(0)</code> returned only <code>[1, 2]</code>?</p>
<p>Transcript for copying (why doesn't Stack have collapsible sections?):</p>
<pre><code>>>> from itertools import takewhile
from itertools import takewhile
>>> g = [1,2,3,4,5]
>>> [a for a in takewhile(lambda x: x < 4, g)]
[1, 2, 3]
>>> [g.pop() for _ in takewhile(lambda x: x < 4, g)]
[5, 4, 3]
>>> g = [1,2,3,4,5]
>>> [g.pop(0) for _ in takewhile(lambda x: x < 4, g)]
[1, 2]
>>> g = ['1', '2', '3', '4', '5']
>>> [a for a in takewhile(lambda x: x != '4', g)]
['1', '2', '3']
>>> [g.pop() for _ in takewhile(lambda x: x != '4', g)]
['5', '4', '3']
>>> g = ['1', '2', '3', '4', '5']
>>> [g.pop(0) for _ in takewhile(lambda x: x != '4', g)]
['1', '2', '3']
</code></pre>
 | 0 | 2016-10-18T12:16:28Z | 40,108,881 | <p>I figured it out, because I tried to use a <code>deque</code>, which raised <code>RuntimeError: deque mutated during iteration</code>.</p>
<p>Execution goes like this:</p>
<ol>
<li><code>g[0] = 1 < 4; g.pop(0) => 1</code></li>
<li><code>g[1] = 3 < 4; g.pop(0) => 2</code></li>
<li><code>g[2] = 5 > 4; break</code></li>
</ol>
<p>This also explains why it worked in the 2nd case: during that iteration, <code>'4'</code> was never reached.</p>
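<p>A safer sketch, assuming the goal is to consume the items up to the first value >= 4 without mutating the list while iterating over it:</p>
<pre><code>from itertools import takewhile

g = [1, 2, 3, 4, 5]
taken = list(takewhile(lambda x: x < 4, g))  # [1, 2, 3]
g = g[len(taken):]                           # [4, 5] -- the remaining items
</code></pre>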
| 1 | 2016-10-18T12:54:55Z | [
"python",
"python-2.7",
"list-comprehension",
"pop"
] |
How to check if a server is up or not in Python? | 40,108,043 | <p>In PHP I just did: <code>$fp = @fsockopen(irc.myserver.net, 6667, $errno, $errstr, 2);</code></p>
<p>Does Python 2.X also have a function like PHP's <code>fsockopen()</code>? If not how else can I check if a server on port 6667 is up or not?</p>
| 1 | 2016-10-18T12:17:22Z | 40,108,187 | <p>The <a href="https://docs.python.org/2/library/socket.html" rel="nofollow">socket module</a> can be used to simply check if a port is open or not.</p>
<pre><code>import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
result = sock.connect_ex(('irc.myserver.net',6667))
if result == 0:
print "Port is open"
else:
print "Port is not open"
</code></pre>
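<p>A slightly more defensive sketch (the two-second timeout mirrors the PHP example and is an assumption), which avoids hanging on unreachable hosts and releases the socket afterwards:</p>
<pre><code>import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(2)  # assumed: two seconds is acceptable, as in the PHP call
try:
    result = sock.connect_ex(('irc.myserver.net', 6667))
finally:
    sock.close()
print "Port is open" if result == 0 else "Port is not open"
</code></pre>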
| 4 | 2016-10-18T12:24:05Z | [
"python",
"python-2.7"
] |
How to remove a black background from an image and make that transparent using opencv? | 40,108,062 | <p>I am using the "PerspectiveTransform" method to transform the image into a given rectangle. The "warpPerspective" method works fine, but the output contains a black background, and I want to remove the black color and make it transparent.</p>
<pre><code>import cv2
import numpy as np
img2 = cv2.imread(r"C:\Users\test\Desktop\map.jpg")
input_quad = np.float32([[0,0],[1024,0],[1024,752],[0,752]])
output_quad = np.float32([[4,139],[500,137],[500,650],[159,636]])
lambda_img = np.zeros((728, 992,3), np.uint8)
lambda_img[:,:,:] = 255
lambda_val = cv2.getPerspectiveTransform( input_quad, output_quad )
dst = cv2.warpPerspective(img2,lambda_val,(992,728),lambda_img, cv2.INTER_CUBIC, borderMode=cv2.BORDER_TRANSPARENT)
cv2.imwrite("Valchanged.png",dst)
</code></pre>
<p>Below is the output I have received.</p>
<p><a href="https://i.stack.imgur.com/9oH53.png" rel="nofollow"><img src="https://i.stack.imgur.com/9oH53.png" alt="enter image description here"></a></p>
 | 1 | 2016-10-18T12:18:13Z | 40,108,728 | <p>Since your input image is in <code>.jpg</code> format, you need to convert it from the <code>BGR</code> domain to the <code>BGRA</code> domain:</p>
<pre><code>img2 = cv2.imread(r"C:\Users\test\Desktop\map.jpg")
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2BGRA)
</code></pre>
<p>Also, you don't need to define a new <code>lambda_img</code> yourself; <code>cv2.warpPerspective</code> returns a numpy array after applying the transformation.</p>
<pre><code>dst = cv2.warpPerspective(img2, lambda_val, (992,728), flags = cv2.INTER_CUBIC, borderMode=cv2.BORDER_CONSTANT, borderValue = [0, 0, 0, 0])
</code></pre>
<p>When you define <code>borderMode=cv2.BORDER_CONSTANT</code>, you also need to define a <code>borderValue</code>, which is the color filled along the borders. In this case it is black with a 0 alpha value, i.e. transparent, or <code>"#00000000"</code>.</p>
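<p>Putting the pieces together, a consolidated sketch of this approach (paths and quad points are taken from the question and assumed unchanged):</p>
<pre><code>import cv2
import numpy as np

img2 = cv2.imread(r"C:\Users\test\Desktop\map.jpg")
img2 = cv2.cvtColor(img2, cv2.COLOR_BGR2BGRA)        # add an alpha channel

input_quad = np.float32([[0, 0], [1024, 0], [1024, 752], [0, 752]])
output_quad = np.float32([[4, 139], [500, 137], [500, 650], [159, 636]])
lambda_val = cv2.getPerspectiveTransform(input_quad, output_quad)

dst = cv2.warpPerspective(img2, lambda_val, (992, 728),
                          flags=cv2.INTER_CUBIC,
                          borderMode=cv2.BORDER_CONSTANT,
                          borderValue=[0, 0, 0, 0])   # transparent border
cv2.imwrite("Valchanged.png", dst)                    # PNG keeps the alpha channel
</code></pre>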
| 2 | 2016-10-18T12:48:00Z | [
"python",
"opencv"
] |
How to get read/write disk speed in Python? | 40,108,070 | <p>In a Python program I need to get the accumulated read/write speeds of all disks on the host. I have been doing it with <code>subprocess.check_output()</code> to call the following Linux shell command:</p>
<pre><code>$ sudo hdparm -t /dev/sda
</code></pre>
<p>This gives as a result:</p>
<pre><code>/dev/sda:
Timing buffered disk reads: 1488 MB in 3.00 seconds = 495.55 MB/sec
</code></pre>
<p>then I can parse the 495.55. OK, so far so good.</p>
<p>But on the man page of <code>hdparm</code> I found this explanation for the <code>-t</code> flag, which basically says that while performing measurements no other process should read from or write to the disk at the same time:</p>
<blockquote>
<p>Perform timings of device reads for benchmark and comparison purposes. For meaningful results, this operation should be repeated 2-3 times on an otherwise inactive system (no other active processes) with at least a couple of megabytes of free memory. This displays the speed of reading through the buffer cache to the disk without any prior caching of data. This measurement is an indication of how fast the drive can sustain sequential data reads under Linux, without any filesystem overhead. To ensure accurate measurements, the buffer cache is flushed during the processing of -t using the BLKFLSBUF ioctl.</p>
</blockquote>
<p><strong>The question is</strong>:</p>
<p>How can I ensure that no other process is accessing disk at the same time when measurements are performed?</p>
| 5 | 2016-10-18T12:18:41Z | 40,108,198 | <p>According to <a href="http://unix.stackexchange.com/questions/55212/how-can-i-monitor-disk-io">http://unix.stackexchange.com/questions/55212/how-can-i-monitor-disk-io</a> the most usable solution includes the tool sysstat or iostat (same package).</p>
<p>But seriously, since you have sudo permissions on the host, you can check yourself whether any IO intensive tasks are going on using any of the popular system monitoring tools. You cannot kill all IO effectively without your measurements also going nuts. Over a longer time the measurements should give you reasonable results nonetheless, since the deviations converge towards stable background noise.</p>
<p>Aside from that, what would you need artificial measurements for? If you simply want to test the hardware capabilities without any real-world context, <strong>do not mount the disk</strong> and test it in raw (binary) mode. A measurement taken while real traffic is going on usually gives you results that are closer to what you can actually expect under load.</p>
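<p>If monitoring under real load is acceptable instead, a minimal sketch (assuming Linux with /proc/diskstats, and assuming whole-disk device names starting with sd/vd/nvme) that derives accumulated read/write throughput over a short interval:</p>
<pre><code>import time

def disk_bytes():
    read_b = write_b = 0
    with open('/proc/diskstats') as f:
        for line in f:
            fields = line.split()
            # note: partitions of matching devices are counted too
            if fields[2].startswith(('sd', 'vd', 'nvme')):
                read_b += int(fields[5]) * 512    # sectors read
                write_b += int(fields[9]) * 512   # sectors written
    return read_b, write_b

r0, w0 = disk_bytes()
time.sleep(3)
r1, w1 = disk_bytes()
print "read: %.2f MB/s, write: %.2f MB/s" % ((r1 - r0) / 3.0 / 1e6,
                                             (w1 - w0) / 3.0 / 1e6)
</code></pre>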
| 3 | 2016-10-18T12:24:27Z | [
"python",
"linux",
"bash",
"performance"
] |
Writing to a JSON file and updating said file | 40,108,274 | <p>I have the following code that will write to a JSON file:</p>
<pre><code>import json
def write_data_to_table(word, hash):
data = {word: hash}
with open("rainbow_table\\rainbow.json", "a+") as table:
table.write(json.dumps(data))
</code></pre>
<p>What I want to do is open the JSON file, add another line to it, and close it. How can I do this without messing with the file?</p>
<p>As of right now when I run the code I get the following:</p>
<pre><code>write_data_to_table("test1", "0123456789")
write_data_to_table("test2", "00123456789")
write_data_to_table("test3", "000123456789")
#<= {"test1": "0123456789"}{"test2": "00123456789"}{"test3": "000123456789"}
</code></pre>
<p>How can I update the file without completely screwing with it?</p>
<p>My expected output would probably be something along the lines of:</p>
<pre><code>{
"test1": "0123456789",
"test2": "00123456789",
"test3": "000123456789",
}
</code></pre>
| 1 | 2016-10-18T12:28:14Z | 40,108,416 | <p>You may read the JSON data with :</p>
<pre><code>parsed_json = json.loads(json_string)
</code></pre>
<p>You now manipulate a classic dictionary. You can add data with:</p>
<pre><code>parsed_json.update({'test4': '000123456789'})
</code></pre>
<p>Then you can write data to a file using :</p>
<pre><code>with open('data.txt', 'w') as outfile:
json.dump(parsed_json, outfile)
</code></pre>
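<p>A sketch combining these steps with the question's helper (assuming the file, when present, already holds a single JSON object):</p>
<pre><code>import json
import os

def write_data_to_table(word, hash, path="rainbow_table\\rainbow.json"):
    data = {}
    if os.path.exists(path):
        with open(path) as table:
            data = json.load(table)
    data[word] = hash
    with open(path, "w") as table:
        json.dump(data, table, indent=4)
</code></pre>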
| 4 | 2016-10-18T12:34:48Z | [
"python",
"json",
"python-2.7"
] |
Writing to a JSON file and updating said file | 40,108,274 | <p>I have the following code that will write to a JSON file:</p>
<pre><code>import json
def write_data_to_table(word, hash):
data = {word: hash}
with open("rainbow_table\\rainbow.json", "a+") as table:
table.write(json.dumps(data))
</code></pre>
<p>What I want to do is open the JSON file, add another line to it, and close it. How can I do this without messing with the file?</p>
<p>As of right now when I run the code I get the following:</p>
<pre><code>write_data_to_table("test1", "0123456789")
write_data_to_table("test2", "00123456789")
write_data_to_table("test3", "000123456789")
#<= {"test1": "0123456789"}{"test2": "00123456789"}{"test3": "000123456789"}
</code></pre>
<p>How can I update the file without completely screwing with it?</p>
<p>My expected output would probably be something along the lines of:</p>
<pre><code>{
"test1": "0123456789",
"test2": "00123456789",
"test3": "000123456789",
}
</code></pre>
| 1 | 2016-10-18T12:28:14Z | 40,108,707 | <p>If you are sure the closing "}" is the last byte in the file you can do this:</p>
<pre><code>>>> f = open('test.json', 'a+')
>>> json.dump({"foo": "bar"}, f) # create the file
>>> f.seek(0)
>>> f.read()
'{"foo": "bar"}'
>>> f.seek(-1, 2)
>>> f.write(',\n' + json.dumps({"spam": "bacon"})[1:])
>>> f.seek(0)
>>> print(f.read())
{"foo": "bar",
"spam": "bacon"}
</code></pre>
<p>Since your data is not hierarchical, you should consider a flat format like "TSV".</p>
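<p>For completeness, a tiny sketch of that flat-format idea (the .tsv file name is hypothetical):</p>
<pre><code>def write_data_to_table(word, hash, path="rainbow_table\\rainbow.tsv"):
    with open(path, "a") as table:
        table.write("%s\t%s\n" % (word, hash))
</code></pre>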
| 1 | 2016-10-18T12:47:01Z | [
"python",
"json",
"python-2.7"
] |
Connect GAE Remote API to dev_appserver.py | 40,108,508 | <p>I want to execute a Python script that connects to my local dev_appserver.py instance to run some DataStore queries.</p>
<p>The dev_appserver.py is running with:</p>
<pre><code>builtins:
- remote_api: on
</code></pre>
<p>As per <a href="https://cloud.google.com/appengine/docs/python/tools/remoteapi" rel="nofollow">https://cloud.google.com/appengine/docs/python/tools/remoteapi</a> I have:</p>
<pre><code>remote_api_stub.ConfigureRemoteApiForOAuth(
hostname,
'/_ah/remote_api'
)
</code></pre>
<p>In the Python script, but what should the hostname be set to?</p>
<p>For example, when dev_appserver.py started, it prints:</p>
<pre><code>INFO 2016-10-18 12:02:16,850 api_server.py:205] Starting API server at: http://localhost:56700
</code></pre>
<p>But if I set the value to localhost:56700, I get the following error:</p>
<pre><code>httplib2.SSLHandshakeError: [SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:590)
</code></pre>
<p>(Same error for any port that has anything running on it - e.g. 8000, 8080, etc).</p>
<p>If anyone has managed to get this to run successfully, what hostname did you use?</p>
<p>Many thanks,
Ned</p>
| 0 | 2016-10-18T12:38:21Z | 40,109,546 | <p>The <code>dev_appserver.py</code> doesn't support SSL (I can't find the doc reference anymore), so it can't answer <code>https://</code> requests.</p>
<p>You could try using http-only URLs (not sure if that is possible with the remote API - I haven't used it yet; you may need to disable the handler's <code>secure</code> option in the <code>app.yaml</code> config files).</p>
<p>At least on my devserver I am able to direct my browser to the http-only API server URL reported by <code>devserver.py</code> at startup and I see <code>{app_id: dev~my_app_name, rtok: '0'}</code>.</p>
<p>Or you could setup a proxy server, see <a href="http://stackoverflow.com/questions/8849020/gae-dev-appserver-py-over-https">GAE dev_appserver.py over HTTPS</a>.</p>
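<p>The same sanity check can also be done from Python rather than a browser; a quick sketch (the port is whatever dev_appserver.py printed at startup):</p>
<pre><code>import urllib2
print urllib2.urlopen('http://localhost:56700/_ah/remote_api').read()
</code></pre>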
| 1 | 2016-10-18T13:24:14Z | [
"python",
"google-app-engine"
] |
Difficulty with python while installing YouCompleteMe in vim | 40,108,521 | <p>I've followed <a href="https://github.com/Valloric/YouCompleteMe/tree/ddf18cc6ec3bb0108bb89ac366fd74394815f2c6#ubuntu-linux-x64" rel="nofollow">these instructions</a>, in order to install YouCompleteMe in Vim, but when I issue:</p>
<pre><code>./install.py --clang-completer
</code></pre>
<p>The following error message comes up:</p>
<pre><code>Searching Python 2.7 libraries...
ERROR: found static Python library (/usr/local/lib/python2.7/config/libpython2.7.a) but a dynamic one is required. You must use a Python compiled with the --enable-shared flag. If using pyenv, you need to run the command:
export PYTHON_CONFIGURE_OPTS="--enable-shared"
before installing a Python version.
Traceback (most recent call last):
File "./install.py", line 44, in <module>
Main()
File "./install.py", line 33, in Main
subprocess.check_call( [ python_binary, build_file ] + sys.argv[1:] )
File "/usr/local/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', u'/home/anmol/.vim/bundle/YouCompleteMe/third_party/ycmd/build.py', '--clang-completer']' returned non-zero exit status 1
</code></pre>
<p>and now I'm stuck, what should I do?</p>
| 0 | 2016-10-18T12:39:00Z | 40,112,214 | <p>The plugin builds for me on the same operating system. The relevant line from the configuration looks like this:</p>
<pre><code>Found PythonLibs: /usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
</code></pre>
<p>The shared object can be identified as belonging to <code>libpython2.7</code> package:</p>
<pre><code>apt-file search /usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
libpython2.7: /usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so
</code></pre>
<p>So I would check whether you have the named file; if not, try <code>sudo apt install libpython2.7</code>. Otherwise, try moving the static version out of the way, or let us know how you installed Python.</p>
| 0 | 2016-10-18T15:26:30Z | [
"python",
"linux",
"python-2.7",
"ubuntu",
"vim"
] |
Difficulty with python while installing YouCompleteMe in vim | 40,108,521 | <p>I've followed <a href="https://github.com/Valloric/YouCompleteMe/tree/ddf18cc6ec3bb0108bb89ac366fd74394815f2c6#ubuntu-linux-x64" rel="nofollow">these instructions</a>, in order to install YouCompleteMe in Vim, but when I issue:</p>
<pre><code>./install.py --clang-completer
</code></pre>
<p>The following error message comes up:</p>
<pre><code>Searching Python 2.7 libraries...
ERROR: found static Python library (/usr/local/lib/python2.7/config/libpython2.7.a) but a dynamic one is required. You must use a Python compiled with the --enable-shared flag. If using pyenv, you need to run the command:
export PYTHON_CONFIGURE_OPTS="--enable-shared"
before installing a Python version.
Traceback (most recent call last):
File "./install.py", line 44, in <module>
Main()
File "./install.py", line 33, in Main
subprocess.check_call( [ python_binary, build_file ] + sys.argv[1:] )
File "/usr/local/lib/python2.7/subprocess.py", line 540, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['/usr/local/bin/python', u'/home/anmol/.vim/bundle/YouCompleteMe/third_party/ycmd/build.py', '--clang-completer']' returned non-zero exit status 1
</code></pre>
<p>and now I'm stuck, what should I do?</p>
| 0 | 2016-10-18T12:39:00Z | 40,131,786 | <p>I checked YouCompleteMe's build system and it uses a custom build script that uses the Python module <code>distutils</code> to find the paths to Python's library and include directories. Your <code>/usr/local/</code> installation of Python is probably included in your <code>PATH</code> variable before the official <code>/usr</code> installation so just running <code>python</code> probably runs your custom installation, making <code>distutils</code> return its directories.</p>
<p>To check whether this is true, try running <code>which python</code>. I assume it will return something like <code>/usr/local/bin/python</code>.</p>
<p>At this point, I see several options.</p>
<ol>
<li>Try running YCM's install script by specifying which Python executable should run it explicitly: <code>/usr/bin/python ./install.py --clang-completer</code></li>
<li><p>Edit the script <code>third_party/ycmd/build.py</code> in YouCompleteMe's plugin directory to hardcode the paths for your custom Python installation. For instance, you could replace the existing <code>FindPythonLibraries</code> function with the following:</p>
<pre class="lang-py prettyprint-override"><code>def FindPythonLibraries():
return ('/usr/lib/python2.7/config-x86_64-linux-gnu/libpython2.7.so',
'/usr/include/python2.7')
</code></pre>
<p>Note that this will make it harder to update YouCompleteMe since you'll have to ensure it doesn't get overwritten when you update its source.</p></li>
<li>Update your custom installation of Python with one built as a shared library. The details of this will depend on how you installed this Python version in the first place. You can check whether you installed it through a package by using <code>dpkg -S /usr/local/lib/python2.7/config/libpython2.7.a</code>. This command will tell you which package installed that file, unless you installed it manually (bypassing the package manager).</li>
<li>Remove your custom <code>/usr/local</code> Python installation while ensuring you have a Python from the official repositories installed (packages <code>python2.7</code> and <code>libpython2.7</code>).</li>
</ol>
<p>In the long run, you would probably be better off by using the official Python packages.</p>
| 0 | 2016-10-19T12:45:48Z | [
"python",
"linux",
"python-2.7",
"ubuntu",
"vim"
] |
How to resize an image while uploading and saving it automatically in Django? | 40,108,544 | <p>I'm unable to save the image that is either being resized or uploaded directly through my <code>admin</code> panel. I want to resize it with <code>PIL</code> or any other way!</p>
<pre><code>def get_product_image_folder(instance, filename):
return "static/images/product/%s/base/%s" %(instance.product_id, filename)
product_image = StringIO.StringIO(i.read())
imageImage = Image.open(product_image)
thumbImage = imageImage.resize((100,100))
thumbfile = StringIO()
thumbImage.save(thumbfile, "JPEG")
thumbcontent = ContentFile(thumbfile.getvalue())
newphoto.thumb.save(filename, thumbcontent)
new_photo.save()
</code></pre>
| 0 | 2016-10-18T12:40:08Z | 40,115,339 | <p>This can be done either in the save method of the model, the save_model method of the admin, or the save method of the form.</p>
<p>I recommend the last one, since it lets you decouple form/validation logic from the model and admin interface. </p>
<p>This could look something like the following:</p>
<pre><code>class MyForm(forms.ModelForm):
    class Meta:
        model = MyModel
    ...
def save(self, *args, **options):
if self.cleaned_data.get("image_field"):
image = self.cleaned_data['image_field']
image = self.resize_image(image)
self.cleaned_data['image_field'] = image
super(MyForm, self).save(*args, **options)
def resize_image(self, image):
filepath = image.file.path
pil_image = PIL.Image.open(filepath)
        resized_image = pil_image  # apply resize steps similar to those in your question
return resized_image
</code></pre>
<p>You can either put this new image in the cleaned_data dictionary so that it saves itself, or you can save it to a new field (something like "my_field_thumbnail") that has editable=False on the model.</p>
<p>More info on the actual process of resizing an image with PIL can be found in other SO questions, eg:
<a href="http://stackoverflow.com/questions/273946/how-do-i-resize-an-image-using-pil-and-maintain-its-aspect-ratio">How do I resize an image using PIL and maintain its aspect ratio?</a></p>
| 0 | 2016-10-18T18:20:05Z | [
"python",
"django"
] |
Validate the size and format of an uploaded image and resize it in Django | 40,108,553 | <p>I am uploading an image in Django, and I want to validate its format and size in forms.py.</p>
<pre><code>class CreateEventStepFirstForm(forms.Form):
user_image = forms.ImageField(required = True, widget=forms.FileInput(attrs={
'class' : 'upload-img',
'data-empty-message':'Please upload artist image, this field is required'
}))
</code></pre>
<p>While uploading this image I want to first validate its format: the form should allow the user to upload only PNG and JPEG images. The user also has to upload an image of at least 700*500 pixels; if the image is smaller than these dimensions, the form should not validate. If the image is larger than 1200*1000 pixels, it should be resized to 700*500 without affecting the image quality.</p>
<p>The view I am using for uploading the file is:</p>
<pre><code>def create_new_event(request, steps):
if request.method == 'POST':
stepFirstForm = CreateEventStepFirstForm(request.POST, request.FILES)
if stepFirstForm.is_valid():
myfile = request.FILES['user_image']
fs = FileSystemStorage()
filename = fs.save('event_artists_images/'+myfile.name, myfile)
uploaded_file_url = fs.url(filename)
return render(request, 'home/create-new-event.html', {'stepFirstForm':stepFirstForm})
</code></pre>
| 2 | 2016-10-18T12:40:17Z | 40,108,698 | <p>You should look at writing your own custom validator. You can read about them here in the <a href="https://docs.djangoproject.com/en/1.10/ref/validators/" rel="nofollow">documentation</a> </p>
<p>Once you've created a validator that checks against those values, you can attach it to the form in a couple of different ways.</p>
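<p>A sketch of such a validator, using the limits from the question (the function name is hypothetical, and the checks are done with Pillow):</p>
<pre><code>from django.core.exceptions import ValidationError
from PIL import Image

def validate_artist_image(upload):
    img = Image.open(upload)
    if img.format not in ('PNG', 'JPEG'):
        raise ValidationError('Only PNG and JPEG images are allowed.')
    width, height = img.size
    if width < 700 or height < 500:
        raise ValidationError('Image must be at least 700x500 pixels.')

# attach it to the field:
# user_image = forms.ImageField(validators=[validate_artist_image], ...)
</code></pre>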
| 0 | 2016-10-18T12:46:48Z | [
"python",
"django"
] |
Python running as Windows Service: OSError: [WinError 6] The handle is invalid | 40,108,816 | <p>I have a Python script, which is running as a Windows Service. The script forks another process with:</p>
<pre><code>with subprocess.Popen( args=[self.exec_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT) as proc:
</code></pre>
<p>which causes the following error:</p>
<pre><code>OSError: [WinError 6] The handle is invalid
File "C:\Program Files (x86)\Python35-32\lib\subprocess.py", line 911, in __init__
File "C:\Program Files (x86)\Python35-32\lib\subprocess.py", line 1117, in _get_handles
</code></pre>
| 0 | 2016-10-18T12:51:41Z | 40,108,817 | <p>Line 1117 in <code>subprocess.py</code> is:</p>
<pre><code>p2cread = _winapi.GetStdHandle(_winapi.STD_INPUT_HANDLE)
</code></pre>
<p>which made me suspect that service processes do not have a STDIN associated with them (TBC)</p>
<p>This troublesome code can be avoided by supplying a file or null device as the stdin argument to <code>popen</code>.</p>
<p>In <strong>Python 3.3, 3.4, and 3.5</strong>, you can simply pass <code>stdin=subprocess.DEVNULL</code>. E.g.</p>
<pre><code>subprocess.Popen( args=[self.exec_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.DEVNULL)
</code></pre>
<p>In <strong>Python 2.x</strong>, you need to get a file handle to the null device, then pass that to popen:</p>
<pre><code>devnull = open(os.devnull, 'wb')
subprocess.Popen( args=[self.exec_path], stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=devnull)
</code></pre>
| 2 | 2016-10-18T12:51:41Z | [
"python",
"windows",
"subprocess"
] |
explode array of array- (Dataframe) pySpark | 40,108,822 | <p>I have a dataframe like this:</p>
<pre><code> +-----+--------------------+
|index| merged|
+-----+--------------------+
| 0|[[2.5, 2.4], [3.5...|
| 1|[[-1.0, -1.0], [-...|
| 2|[[-1.0, -1.0], [-...|
| 3|[[0.0, 0.0], [0.5...|
| 4|[[0.5, 0.5], [1.0...|
| 5|[[0.5, 0.5], [1.0...|
| 6|[[-1.0, -1.0], [0...|
| 7|[[0.0, 0.0], [0.5...|
| 8|[[0.5, 0.5], [1.0...|
+-----+--------------------+
</code></pre>
<p>And I want to explode the merged column into </p>
<pre><code>+-----+-------+-------+
|index|Column1|Column2|
+-----+-------+-------+
| 0| 2.5| 2.4 |
| 1| 3.5| 0.5|
| 2| -1.0| -1.0|
| 3| -1.0| -1.0|
| 4| 0.0 | 0.0 |
| 5| 0.5| 0.74|
+-----+-------+-------+
</code></pre>
<p>Each tuple [[2.5, 2.4], [3.5, 0.5]] represents two columns: 2.5 and 3.5 will be stored in the first column, and 2.4 and 0.5 will be stored in the second column.</p>
<p>So I tried this</p>
<pre><code>df= df.withColumn("merged", df["merged"].cast("array<array<float>>"))
df= df.withColumn("merged",explode('merged'))
</code></pre>
<p>Then I will apply a UDF to create another DataFrame,</p>
<p>but I can't cast the data or apply explode, and I receive the error:</p>
<pre><code>pyspark.sql.utils.AnalysisException: u"cannot resolve 'cast(merged as array<array<float>)' due to data type mismatch: cannot cast StringType to ArrayType(StringType,true)
</code></pre>
<p>I tried also</p>
<pre><code>df= df.withColumn("merged", df["merged"].cast("array<string>"))
</code></pre>
<p>but nothing works,
and if I apply explode without the cast, I receive:</p>
<pre><code>pyspark.sql.utils.AnalysisException: u"cannot resolve 'explode(merged)' due to data type mismatch: input to function explode should be array or map type, not StringType;
</code></pre>
| 1 | 2016-10-18T12:52:00Z | 40,109,388 | <p>You could try the following code:</p>
<pre><code>from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.types import FloatType, StringType, IntegerType
from pyspark.sql.functions import udf, col
def col1_calc(merged):
return merged[0][0]
def col2_calc(merged):
return merged[0][1]
if __name__ == '__main__':
spark = SparkSession \
.builder \
.appName("Python Spark SQL Hive integration example") \
.getOrCreate()
df = spark.createDataFrame([
(0, [[2.5,2.4],[3.5]]),
(1, [[-1.0,-1.0],[3.5]]),
(2, [[-1.0,-1.0],[3.5]]),
], ["index", "merged"])
df.show()
column1_calc = udf(col1_calc, FloatType())
df = df.withColumn('Column1', column1_calc(df['merged']))
column2_calc = udf(col2_calc, FloatType())
df = df.withColumn('Column2', column2_calc(df['merged']))
df = df.select(['Column1', 'Column2', 'index'])
df.show()
</code></pre>
<p>Output:</p>
<pre><code>+-------+-------+-----+
|Column1|Column2|index|
+-------+-------+-----+
| 2.5| 2.4| 0|
| -1.0| -1.0| 1|
| -1.0| -1.0| 2|
+-------+-------+-----+
</code></pre>
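<p>As a side note, if <code>merged</code> is already a real <code>array<array<float>></code> column (as in the <code>createDataFrame</code> call above) rather than a string, the same values can be extracted without UDFs by indexing into the nested array; a sketch:</p>
<pre><code>df2 = df.select(
    "index",
    df["merged"][0][0].alias("Column1"),   # first element of the first inner array
    df["merged"][0][1].alias("Column2"),   # second element of the first inner array
)
df2.show()
</code></pre>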
| 0 | 2016-10-18T13:17:09Z | [
"python",
"apache-spark",
"pyspark",
"spark-dataframe"
] |
Error 'cannot import name post_revision_commit' | 40,108,833 | <p>Hi everyone, I moved my project to a server, and now when I try to load the database with
<code>python manage.py loaddata resource/ddbb/20160817_db.json</code>
or even run the server, I get this error.</p>
<pre><code>File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
utility.execute()
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/core/management/__init__.py", line 328, in execute
django.setup()
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/apps/config.py", line 198, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/models/__init__.py", line 3, in <module>
from .pagemodel import * # nopyflakes
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/models/pagemodel.py", line 1453, in <module>
_reversion()
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/models/pagemodel.py", line 1449, in _reversion
exclude_fields=exclude_fields
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/utils/helpers.py", line 135, in reversion_register
from cms.utils import reversion_hacks
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/utils/reversion_hacks.py", line 18, in <module>
from reversion.models import Revision, Version, post_revision_commit # NOQA # nopyflakes
ImportError: cannot import name post_revision_commit
</code></pre>
<p>These are the packages installed in my environment on the server:</p>
<pre><code>cmsplugin-filer==1.1.3
dj-database-url==0.4.1
Django==1.8.15
django-appconf==1.0.2
django-classy-tags==0.8.0
django-cms==3.4.1
django-filer==1.2.5
django-formtools==1.0
django-mptt==0.8.6
django-polymorphic==0.8.1
django-reversion==2.0.6
django-sekizai==0.10.0
Django-Select2==4.3.2
django-treebeard==4.0.1
djangocms-admin-style==1.2.5
djangocms-attributes-field==0.1.1
djangocms-column==1.7.0
djangocms-googlemap==0.5.2
djangocms-inherit==0.2.2
djangocms-installer==0.9.1
djangocms-link==2.0.1
djangocms-snippet==1.9.1
djangocms-style==1.7.0
djangocms-text-ckeditor==3.3.0
djangocms-video==2.0.2
djangorestframework==3.4.7
easy-thumbnails==2.3
feedparser==5.2.1
html5lib==0.9999999
MySQL-python==1.2.5
Pillow==3.4.1
pytz==2016.7
six==1.10.0
tzlocal==1.3
Unidecode==0.4.19
</code></pre>
<p>Any idea how I can solve this problem?</p>
| 0 | 2016-10-18T12:52:39Z | 40,109,026 | <p>It looks like the <code>django-cms</code> version you are using doesn't support <code>django-reversion</code> 2.0+. The comments in the <a href="https://github.com/divio/django-cms/blob/3.4.1/cms/utils/reversion_hacks.py#L3" rel="nofollow">django-cms source</a> seem to affirm this. I would try installing the latest 1.x version of <code>django-reversion</code> and see if that doesn't work. </p>
| 1 | 2016-10-18T13:01:28Z | [
"python",
"django"
] |
Error 'cannot import name post_revision_commit' | 40,108,833 | <p>Hi everyone, I moved my project to a server, and now when I try to load the database with
<code>python manage.py loaddata resource/ddbb/20160817_db.json</code>
or even run the server, I get this error.</p>
<pre><code>File "manage.py", line 10, in <module>
execute_from_command_line(sys.argv)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/core/management/__init__.py", line 354, in execute_from_command_line
utility.execute()
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/core/management/__init__.py", line 328, in execute
django.setup()
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/apps/registry.py", line 108, in populate
app_config.import_models(all_models)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/django/apps/config.py", line 198, in import_models
self.models_module = import_module(models_module_name)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/models/__init__.py", line 3, in <module>
from .pagemodel import * # nopyflakes
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/models/pagemodel.py", line 1453, in <module>
_reversion()
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/models/pagemodel.py", line 1449, in _reversion
exclude_fields=exclude_fields
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/utils/helpers.py", line 135, in reversion_register
from cms.utils import reversion_hacks
File "/home/mxp1217/django1101/lib/python2.7/site-packages/cms/utils/reversion_hacks.py", line 18, in <module>
from reversion.models import Revision, Version, post_revision_commit # NOQA # nopyflakes
ImportError: cannot import name post_revision_commit
</code></pre>
<p>These are the packages installed in my environment on the server:</p>
<pre><code>cmsplugin-filer==1.1.3
dj-database-url==0.4.1
Django==1.8.15
django-appconf==1.0.2
django-classy-tags==0.8.0
django-cms==3.4.1
django-filer==1.2.5
django-formtools==1.0
django-mptt==0.8.6
django-polymorphic==0.8.1
django-reversion==2.0.6
django-sekizai==0.10.0
Django-Select2==4.3.2
django-treebeard==4.0.1
djangocms-admin-style==1.2.5
djangocms-attributes-field==0.1.1
djangocms-column==1.7.0
djangocms-googlemap==0.5.2
djangocms-inherit==0.2.2
djangocms-installer==0.9.1
djangocms-link==2.0.1
djangocms-snippet==1.9.1
djangocms-style==1.7.0
djangocms-text-ckeditor==3.3.0
djangocms-video==2.0.2
djangorestframework==3.4.7
easy-thumbnails==2.3
feedparser==5.2.1
html5lib==0.9999999
MySQL-python==1.2.5
Pillow==3.4.1
pytz==2016.7
six==1.10.0
tzlocal==1.3
Unidecode==0.4.19
</code></pre>
<p>Any idea how I can solve this problem?</p>
 | 0 | 2016-10-18T12:52:39Z | 40,109,269 | <p>You should be on the latest <code>django-reversion</code>, because the <code>post_revision_commit</code> signal was removed in 2.0.0 and added back in the latest version. <a href="http://django-reversion.readthedocs.io/en/latest/changelog.html?highlight=post_revision_commit#signals" rel="nofollow">Reference</a></p>
| 1 | 2016-10-18T13:11:24Z | [
"python",
"django"
] |
JavaScript HTML element scraping using Scrapy on Python 2.7.11: why do I get this error? | 40,108,890 | <pre><code>[root@Imx8 craigslist_sample]# scrapy crawl spider
/root/Python-2.7.11/craigslist_sample/craigslist_sample/spiders/test.py:1: ScrapyDeprecationWarning: Module `scrapy.spider` is deprecated, use `scrapy.spiders` instead
from scrapy.spider import BaseSpider
/root/Python-2.7.11/craigslist_sample/craigslist_sample/spiders/test.py:6: ScrapyDeprecationWarning: craigslist_sample.spiders.test.MySpider inherits from deprecated class scrapy.spiders.BaseSpider, please inherit from scrapy.spiders.Spider. (warning only on first subclass, there may be others)
class MySpider(BaseSpider):
2016-10-18 18:23:30 [scrapy] INFO: Scrapy 1.2.0 started (bot: craigslist_sample)
2016-10-18 18:23:30 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'craigslist_sample.spiders', 'SPIDER_MODULES': ['craigslist_sample.spiders'], 'ROBOTSTXT_OBEY': True, 'BOT_NAME': 'craigslist_sample'}
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 11, in <module>
sys.exit(execute())
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 142, in execute
_run_print_help(parser, _run_command, cmd, args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
func(*a, **kw)
File "/usr/local/lib/python2.7/site-packages/scrapy/cmdline.py", line 149, in _run_command
cmd.run(args, opts)
File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
self.crawler_process.crawl(spname, **opts.spargs)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 162, in crawl
crawler = self.create_crawler(crawler_or_spidercls)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 190, in create_crawler
return self._create_crawler(crawler_or_spidercls)
File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 194, in _create_crawler
spidercls = self.spider_loader.load(spidercls)
File "/usr/local/lib/python2.7/site-packages/scrapy/spiderloader.py", line 43, in load
raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: spider'
</code></pre>
 | -1 | 2016-10-18T12:55:25Z | 40,109,116 | <p>You should set <code>name = 'spider'</code> in craigslist_sample/craigslist_sample/spiders/test.py:</p>
<pre><code>class MySpider(Spider):
name = 'spider'
def parse(self,response):
#....
</code></pre>
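<p>A slightly fuller sketch of the fix, also switching to the non-deprecated import location mentioned in the warnings (start_urls and the rest of your existing test.py are assumed to stay as they are):</p>
<pre><code>from scrapy.spiders import Spider   # new location, per the deprecation warning

class MySpider(Spider):
    name = 'spider'                 # must match the name used in `scrapy crawl spider`

    def parse(self, response):
        pass                        # your parsing logic goes here
</code></pre>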
| 0 | 2016-10-18T13:05:17Z | [
"python",
"scrapy",
"pycurl"
] |
Python function: variable and string | 40,108,906 | <p>I have following formula to check (Thanks for helping me on <a href="http://stackoverflow.com/questions/40102772/python-applying-function-to-list-of-tems/40102936?noredirect=1#comment67478880_40102936">this</a>!).</p>
<pre><code>queries = ['dog','cat','hamster']
def get_trends(queries):
return pd.concat([pytrend.trend({'q': x, 'date': '01/2015 12m'}, return_type='dataframe')
for x in queries], axis=1)
get_trends(queries)
</code></pre>
<p>This function fires a Google Trends query for each item in the list and puts the returned dataframes next to each other. What I need to do now is exactly the same, but with one static variable (pet) included in each query.</p>
<p>For example, a query without the formula would be</p>
<pre><code>return pytrend.trend({'q': 'pet, dog', 'date': '01/2015 12m'}, return_type='dataframe')
</code></pre>
<p>I know I could try</p>
<pre><code>queries = ['pet, dog','pet, cat','pet, hamster']
</code></pre>
<p>But maybe there's a more elegant way?</p>
<p>I tried </p>
<pre><code>static =['pet']
return pytrend.trend({'q': ''' + static + x + ''', 'date': '01/2015 12m'}, return_type='dataframe')
</code></pre>
<p>but wasn't successful with that.</p>
| 0 | 2016-10-18T12:55:44Z | 40,109,680 | <p>You can do it this way:</p>
<pre><code>In [54]: %paste
static = 'animals'
animals = ['dog','cat','hamster']
queries = ['{}, {}'.format(static, x) for x in animals]
## -- End pasted text --
In [55]: queries
Out[55]: ['animals, dog', 'animals, cat', 'animals, hamster']
</code></pre>
<p>now you can pass <code>queries</code> to your function:</p>
<pre><code>get_trends(queries)
</code></pre>
| 0 | 2016-10-18T13:29:38Z | [
"python",
"string",
"list",
"function"
] |
Display Sum of overdue payments in Customer Form view for each customer | 40,109,065 | <p>In accounting -> Customer Invoices, there is a filter called <code>Overdue</code>. Now I want to calculate the overdue payments per user and then display them on the customer form view.
I just want to know how we can apply the <strong>filter</strong>'s condition in Python code. I have already defined a smart button that displays a total invoice value by inheriting account.invoice.</p>
<p>"Overdue" filter in invoice search view:</p>
<p><code>['&', ('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')]</code></p>
| 0 | 2016-10-18T13:02:57Z | 40,124,695 | <p>Your smart button on partners should use a new action, like the button for customer or vendor bills. This button definition should include <code>context="{'default_partner_id': active_id}</code> which will allow to change the partner filter later on, or the upcoming action definition should include the partner in its domain.
The action should be for the model <code>account.invoice</code> and has to have the following domain:
<code>[('date_due', '<', time.strftime('%Y-%m-%d')), ('state', '=', 'open')]</code></p>
<p>If you want to filter only outgoing (customer invoices) add a filter tuple for field <code>type</code>.</p>
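<p>For the computed value itself, a rough sketch of a field on res.partner that the smart button could display (field and method names are hypothetical; adapt them to your module):</p>
<pre><code>from openerp import api, fields, models

class ResPartner(models.Model):
    _inherit = 'res.partner'

    overdue_amount = fields.Float(compute='_compute_overdue_amount')

    @api.multi
    def _compute_overdue_amount(self):
        for partner in self:
            invoices = self.env['account.invoice'].search([
                ('partner_id', '=', partner.id),
                ('state', '=', 'open'),
                ('date_due', '<', fields.Date.today()),
            ])
            partner.overdue_amount = sum(invoices.mapped('residual'))
</code></pre>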
| 1 | 2016-10-19T07:19:14Z | [
"python",
"openerp",
"odoo-9"
] |
I want to make a variable in my views.py which changes depending on the name of the urlpattern used | 40,109,185 | <p>Here's my code. Whichever urlpattern is chosen, I want its name to be stored as <code>url</code> in views.py, which is then used in the queryset filter().</p>
<p><strong>urls.py</strong></p>
<pre><code>url(r'^news/', BoxesView.as_view(), name='news'),
url(r'^sport/', BoxesView.as_view(), name='sport'),
url(r'^cars/', BoxesView.as_view(), name='cars'),
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>class BoxesView(ListView):
url = #urlname to go here
def get_queryset(self):
queryset_list = Post.objects.all().filter(category=url)
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>category = models.CharField(choices=CATEGORY_CHOICES)
</code></pre>
<p><strong>choices.py</strong></p>
<pre><code>CATEGORY_CHOICES = (
('1', 'news'),
('2', 'sport'),
('3', 'cars'),
)
</code></pre>
<p>Any idea?</p>
| 0 | 2016-10-18T13:07:41Z | 40,109,729 | <p>I would replace your url.py by something like this:</p>
<pre><code>url(r'(?P<keyword>\w+)/$', BoxesView.as_view())
</code></pre>
<p>This changes your address into a URL parameter, which you can then access in your methods like this:</p>
<pre><code>def get_queryset(self):
url = self.kwargs['keyword']
queryset_list = Post.objects.all().filter(category=url)
</code></pre>
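<p>One extra detail worth noting: the model stores the choice keys ('1', '2', '3'), while the URL keyword is the label ('news', 'sport', 'cars'), so a small mapping is needed; a sketch (assuming Post and CATEGORY_CHOICES are imported):</p>
<pre><code>class BoxesView(ListView):
    model = Post

    def get_queryset(self):
        keyword = self.kwargs['keyword']                       # e.g. 'news'
        lookup = {name: key for key, name in CATEGORY_CHOICES}
        return Post.objects.filter(category=lookup.get(keyword))
</code></pre>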
| 1 | 2016-10-18T13:31:56Z | [
"python",
"django",
"django-views"
] |
I want to make a variable in my views.py which changes depending on the name of the urlpattern used | 40,109,185 | <p>Here's my code. Whichever urlpattern is chosen, I want its name to be stored as <code>url</code> in views.py, which is then used in the queryset filter().</p>
<p><strong>urls.py</strong></p>
<pre><code>url(r'^news/', BoxesView.as_view(), name='news'),
url(r'^sport/', BoxesView.as_view(), name='sport'),
url(r'^cars/', BoxesView.as_view(), name='cars'),
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>class BoxesView(ListView):
url = #urlname to go here
def get_queryset(self):
queryset_list = Post.objects.all().filter(category=url)
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>category = models.CharField(choices=CATEGORY_CHOICES)
</code></pre>
<p><strong>choices.py</strong></p>
<pre><code>CATEGORY_CHOICES = (
('1', 'news'),
('2', 'sport'),
('3', 'cars'),
)
</code></pre>
<p>Any idea?</p>
| 0 | 2016-10-18T13:07:41Z | 40,110,563 | <p>You can use this to get the name of the view</p>
<pre><code> url = resolve(self.request.path_info).url_name
</code></pre>
<p>UPDATE: Added "self." which is needed when using generic views. And don't forget to import:</p>
<pre><code> from django.core.urlresolvers import resolve
</code></pre>
| 0 | 2016-10-18T14:11:28Z | [
"python",
"django",
"django-views"
] |
%s showing strange behavior in regex | 40,109,204 | <p>I have a string in which I want to find some words preceding a parenthesis. Let's say the string is:</p>
<blockquote>
<p>'there are many people in the world having colorectal cancer (crc) who also have the depression syndrome (ds)'</p>
</blockquote>
<p>I want to capture at most 5 words before a parenthesis. I have a list <code>acronym_list</code> of abbreviations which are inside the brackets - <code>[(crc), (ds)]</code>. So I am using the following code - </p>
<pre><code>acrolen=5
rt=[]
for acro in acronym_list:
find_words= re.findall('((?:\w+\W+){1,%d}%s)' %(acrolen, acro), text, re.I)
for word in find_words:
rt.append(word)
print rt
</code></pre>
<p>But this gives this result - </p>
<pre><code>('the world having colorectal cancer (crc', 'crc')
('also have the depression syndrome (ds', 'ds')
</code></pre>
<p>Whereas if I use the regex -</p>
<p><code>find_words= re.findall('((?:\w+\W+){1,%d}\(crc\))' %(acrolen),s, re.I)</code></p>
<p>Then it is able to find exactly what I want i.e. - </p>
<pre><code>the world having colorectal cancer (crc)
</code></pre>
<p>The question is: why does using <code>%s</code> for the string here cause the regex match to be so vastly different (having unnecessary brackets around it, repeating the acronym, etc.)?</p>
<p>How can I use the 1st regex properly so that I can automate the process using a loop rather than having to enter the exact string in the regex every time?</p>
| 1 | 2016-10-18T13:08:35Z | 40,109,914 | <p>You need to make sure the variables you pass are escaped correctly so as to be used as literal text inside a regex pattern. Use <code>re.escape(acro)</code>:</p>
<pre><code>import re
text = "there are many people in the world having colorectal cancer (crc) who also have the depression syndrome (ds)"
acrolen=5
rt=[]
acronym_list = ["(crc)", "(ds)"]
for acro in acronym_list:
p = r'((?:\w+\W+){1,%d}%s)' %(acrolen, re.escape(acro))
# Or, use format:
# p = r'((?:\w+\W+){{1,{0}}}{1})'.format(acrolen, re.escape(acro))
find_words= re.findall(p, text, re.I)
for word in find_words:
rt.append(word)
print rt
</code></pre>
<p>See the <a href="https://ideone.com/9lwLYm" rel="nofollow">Python demo</a></p>
<p>Also, note you do not need to enclose the whole pattern with a capturing group, <code>re.findall</code> will return match values if no capturing group is defined in the pattern.</p>
<p>It is also recommended to use raw string literals when defining regex patterns to avoid ambiguous situations.</p>
| 1 | 2016-10-18T13:40:54Z | [
"python",
"regex"
] |
How to retry before an exception with Eclipse/PyDev | 40,109,228 | <p>I am using Eclipse + PyDev, although I can break on exception using PyDev->Manage Exception Breakpoints, I am unable to continue the execution after the exception.</p>
<p>What I would like to be able to do is to set the next statement before the exception so I can run a few commands in the console window and continue execution. If I use Eclipse -> Run -> Set Next Statement before the exception, the editor will show the next statement being where I set it but then when resuming the execution, the program will be terminated.</p>
<p>Can this be done ?</p>
| 1 | 2016-10-18T13:09:30Z | 40,130,262 | <p>Unfortunately no, this is a Python restriction on setting the next line to be executed: it can't set the next statement after an exception is thrown (it can't even go to a different block -- i.e.: if you're inside a try..except, you can't set the next statement to be out of that block).</p>
<p>You could in theory take a look at Python itself as it's open source and see how it handles that and make it more generic to handle your situation, but apart from that, what you want is not doable.</p>
| 1 | 2016-10-19T11:32:25Z | [
"python",
"eclipse",
"pydev"
] |
Django creating wrong type for fields in intermediate table (manytomany) | 40,109,257 | <p>I have a model in Django whose <code>pk</code> is not an integer, and it has a field which is a <code>manytomany</code>. This <code>manytomany</code> references the model itself.</p>
<p>When I ran <code>makemigrations</code> I didn't notice it, but it did not create the fields in the intermediate table as <code>char(N)</code>. In fact, it created them as <code>integer</code>.</p>
<pre><code># models.py
class Inventory(models.Model):
sample_id = models.CharField(max_length=50, primary_key=True)
parent_id = models.ManyToManyField("self")
</code></pre>
<p>This throws errors whenever I try to add objects to my parent model</p>
<pre><code>>>> p = Inventory.objects.get(sample_id='sample01')
>>> child = Inventory.objects.get(sample_id='sample02')
>>> p.parent_id.add(child)
</code></pre>
<p>I get the error</p>
<pre><code>psycopg2.DataError: invalid input syntax for integer: "sample02"
LINE 1: ...HERE ("inventory_parent_id"."to_inventory_id" IN ('sample...
</code></pre>
<p>I saw the fields in the intermediate table, <code>inventory_parent_id</code>, created by Django and their types are not correct.</p>
<pre><code>Columns (3)
|--id (integer)
|--from_inventory_id (integer)
|--to_inventory_id (integer)
</code></pre>
<p><strong>My questions are</strong>: Is it bad if I change the types manually? Will it break the migrations? Or should I have done something so Django would pick up the correct type?</p>
| 1 | 2016-10-18T13:10:55Z | 40,109,488 | <p>Try to re-create migrations (unapply migrations by using <code>./manage.py migrate YOURAPP PREVIOUS_MIGRATION_NUMBER</code> or <code>./manage.py migrate YOURAPP zero</code> if it's initial migration), remove migration file (don't forget about <code>.pyc</code> file) and generate it again.</p>
<p>If that doesn't help, you can try to create a custom through table with the proper field types and then recreate the migration.</p>
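<p>A sketch of what such an explicit through model could look like (names are illustrative; the ForeignKey columns will then be created with the same char type as the primary key):</p>
<pre><code>class Inventory(models.Model):
    sample_id = models.CharField(max_length=50, primary_key=True)
    parent_id = models.ManyToManyField('self', through='InventoryLink',
                                       symmetrical=False)

class InventoryLink(models.Model):
    from_inventory = models.ForeignKey('Inventory', related_name='children')
    to_inventory = models.ForeignKey('Inventory', related_name='parents')
</code></pre>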
| 0 | 2016-10-18T13:21:14Z | [
"python",
"django",
"django-models",
"manytomanyfield"
] |
Spark Dataframe-Python | 40,109,274 | <p>In pandas, I can successfully run the following:</p>
<pre><code>def car(t):
if t in df_a:
return df_a[t]/df_b[t]
else:
return 0
</code></pre>
<p>But how can I do the exact same thing with a Spark DataFrame? Many thanks!<br>
The data is like this</p>
<pre><code>df_a
a 20
b 40
c 60
df_b
a 80
b 50
e 100
</code></pre>
<p>The result should be 0.25 when calling car('a').</p>
| 0 | 2016-10-18T13:11:42Z | 40,113,481 | <p>First you have to <code>join</code> both dataframes, then you have to <code>filter</code> by the letter you want and <code>select</code> the operation you need.</p>
<pre class="lang-py prettyprint-override"><code>df_a = sc.parallelize([("a", 20), ("b", 40), ("c", 60)]).toDF(["key", "value"])
df_b = sc.parallelize([("a", 80), ("b", 50), ("e", 100)]).toDF(["key", "value"])
def car(c):
return df_a.join(df_b, on=["key"]).where(df_a["key"] == c).select((df_a["value"] / df_b["value"]).alias("ratio")).head()
car("a")
# Row(ratio=0.25)
</code></pre>
| 3 | 2016-10-18T16:27:53Z | [
"python",
"apache-spark"
] |
How do I link python 3.4.3 to opencv? | 40,109,379 | <p>So I have OpenCV on my computer all sorted out, I can use it in C/C++ and the Python 2.7.* that came with my OS.</p>
<p>My computer runs on Linux Deepin and whilst I usually use OpenCV on C++, I need to use Python 3.4.3 for some OpenCV tasks.</p>
<p>Problem is, I've installed python 3.4.3 now but whenever I try to run an OpenCV program on it, it doesn't recognize numpy or cv2, the modules I need for OpenCV. I've already built and installed OpenCV and I'd rather not do it again</p>
<p>Is there some way I can link my new Python 3.4.3 environment to numpy and the opencv I already built so I can use OpenCV on Python 3.4.3?</p>
<p>Thanks in advance</p>
| 1 | 2016-10-18T13:16:42Z | 40,109,662 | <p>You can try:</p>
<ol>
<li>Download the OpenCV module</li>
<li>Copy the ./opencv/build/python/3.4/x64/cv2.pyd file into the Python installation's site-packages directory: ./Python34/Lib/site-packages.</li>
</ol>
<p>I hope this helps</p>
| 0 | 2016-10-18T13:29:17Z | [
"python",
"python-2.7",
"python-3.x",
"opencv",
"numpy"
] |
Django AppsNotLoaded | 40,109,400 | <p>I'm trying to make a python script to put some things in my database;</p>
<pre><code>from django.conf import settings
settings.configure()
import django.db
from models import Hero #Does not work..?
heroes = [name for name in open('hero_names.txt').readlines()]
names_in_db = [hero.hero_name for hero in Hero.objects.all()] #ALready existing heroes
for heroname in heroes:
if heroname not in names_in_db:
h = Hero(hero_name=heroname, portraid_link='/static/heroes/'+heroname)
h.save()
</code></pre>
<p>The import throws the following</p>
<pre><code>Traceback (most recent call last):
File "heroes_to_db.py", line 4, in <module>
from models import Hero
File "C:\Users\toft_\Desktop\d2-patchnotes-master\dota2notes\patch\models.py", line 5, in <module>
class Hero(models.Model):
File "C:\Python27\lib\site-packages\django\db\models\base.py", line 105, in __new__
app_config = apps.get_containing_app_config(module)
File "C:\Python27\lib\site-packages\django\apps\registry.py", line 237, in get_containing_app_config
self.check_apps_ready()
File "C:\Python27\lib\site-packages\django\apps\registry.py", line 124, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>I know I can do <code>python manage.py shell</code> and write the code by hand but, to be honest, I don't want to. What am I missing?</p>
| 0 | 2016-10-18T13:17:49Z | 40,109,851 | <p>Django must configure all installed applications before you can use any models. To do this you must call <code>django.setup()</code></p>
<pre><code>import django
django.setup()
</code></pre>
<p><a href="https://docs.djangoproject.com/en/1.10/ref/applications/#how-applications-are-loaded" rel="nofollow">From the documentation:</a></p>
<blockquote>
<p>This function is called automatically:</p>
<ul>
<li>When running an HTTP server via Django's WSGI support.</li>
<li>When invoking a management command.</li>
</ul>
<p>It must be called explicitly in other cases, for instance in plain Python scripts.</p>
</blockquote>
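<p>A sketch of the usual standalone-script boilerplate (assuming the project's settings module is importable; replace <code>mysite.settings</code> with yours):</p>
<pre><code>import os
import django

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mysite.settings')
django.setup()

from myapp.models import Hero   # import models only after setup()
</code></pre>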
| 1 | 2016-10-18T13:37:51Z | [
"python",
"django"
] |
VirtualBox command works correctly in bash, but does not work in nginx | 40,109,509 | <p>We have a project on nginx/Django, using VirtualBox.
When we try to run the command <code>VBoxManage list runningvms</code> from nginx, we get the following error:</p>
<pre class="lang-none prettyprint-override"><code>Failed to initialize COM because the global settings directory '/.config/VirtualBox' is not accessible!
</code></pre>
<p>If we run this command in the console, it works fine.</p>
<p>What can we do to make it work correctly under nginx?</p>
<p>Other details:
nginx is run by the user "www-data"; the console command is run by a different user (Administrator).</p>
| 0 | 2016-10-18T13:22:03Z | 40,113,772 | <p>We have fixed the issue.</p>
<ol>
<li>The environment variable "HOME" (<code>os.environ['HOME']</code>) was wrong. We changed it, and the problem was gone (see the sketch below).</li>
<li>Using the VirtualBox Python API instead of ssh can really help with this problem, as <strong>RegularlyScheduledProgramming</strong> suggested; we added the Python API too.</li>
</ol>
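<p>For reference, a minimal sketch of what the first fix looks like when shelling out from Python/Django; the <code>/var/www</code> home directory is only an example, use a directory that the nginx worker user actually owns:</p>
<pre><code>import os
import subprocess

# Give VBoxManage an explicit HOME so VirtualBox can reach its settings directory.
env = dict(os.environ, HOME='/var/www')  # assumed writable home for the www-data user
output = subprocess.check_output(['VBoxManage', 'list', 'runningvms'], env=env)
print(output)
</code></pre>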
<p>Thanks!</p>
| 0 | 2016-10-18T16:45:00Z | [
"python",
"django",
"bash",
"nginx",
"virtualbox"
] |
How to prevent shell script injection on my local webpage? | 40,109,555 | <p>I have an openwrt router that I use as a local webserver and I created a webpage to dial USSD on my 3G modem, the script looks like this:</p>
<pre><code><html>
<title>CHECK USSD</title>
<body>
<?php
if($_POST['send']){
$ussd=$_POST['ussd'];
exec('ussd.py '.$ussd,$out);
echo "Result: ".$out[0];
}
?>
<form action="" method="post">
USSD :<input type="text" autofocus name="ussd" size="14" value="">
<input name="send" type="submit" value="Send Ussd">
</form>
</body>
</html>
</code></pre>
<p><code>ussd.py</code> is the python script that I use to check USSD. The problem is that when a user tries to input some kind of script in the input box, like: <code>$(echo "hacked" > /www/index.html)</code> or <code>$(rm -fr /root/*)</code> those scripts get executed as well. So people can easily hack my router. How can I prevent that from happening ?</p>
| 1 | 2016-10-18T13:24:29Z | 40,109,724 | <p>The command you're looking for is <code>escapeshellarg()</code> so that your string is treated as a single argument.</p>
<pre><code>$pattern = '/Some regex that matches your potential inputs/';
if (preg_match($pattern, $ussd)) {
exec('ussd.py '.escapeshellarg($ussd),$out);
} else {
//throw error / response to user
}
</code></pre>
<p>Generally speaking though, executing directly from user input is not considered safe. Hence, find a regex that corresponds to a pattern in your potential input commands, and ensure that it matches before proceeding (You have specified that hardcoded commands are not an option).</p>
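<p>As an extra layer of defence on the Python side, <code>ussd.py</code> itself can refuse anything that does not look like a USSD code before it ever reaches the modem. A minimal sketch; the whitelist pattern is an assumption, so adjust it to the codes you actually dial:</p>
<pre><code>import re
import sys

# Whitelist of USSD codes such as *123# or *101*2# -- the pattern is an assumption
USSD_RE = re.compile(r'^\*\d{1,3}(\*\d{1,10})*#$')

code = sys.argv[1] if len(sys.argv) > 1 else ''
if not USSD_RE.match(code):
    sys.exit('Refusing to dial: not a valid USSD code')
# ... continue with the existing modem logic using the validated code ...
</code></pre>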
| 0 | 2016-10-18T13:31:43Z | [
"php",
"python",
"shell"
] |
Python GUI program in wxPython won't run | 40,109,565 | <p>I have the following code, and I am following a tutorial:</p>
<p>(<a href="http://zetcode.com/wxpython/layout/" rel="nofollow">http://zetcode.com/wxpython/layout/</a> - GoToClass part)</p>
<p>I can't figure out what is wrong with it ... :/</p>
<p>As you can see in the tutorial, it is supposed to produce this:</p>
<p><a href="https://i.stack.imgur.com/JbT3I.png" rel="nofollow"><img src="https://i.stack.imgur.com/JbT3I.png" alt="enter image description here"></a></p>
<p>The code:</p>
<pre><code>#!/usr/bin/python
# -*- coding: utf-8 -*-
# gotoclass.py
import wx
class Example(wx.Frame):
def __init__(self, parent, title):
super(Example, self).__init__(parent, title=title,
size=(390, 350))
self.InitUI()
self.Centre()
self.Show()
def InitUI(self):
panel = wx.Panel(self)
font = wx.SystemSettings_GetFont(wx.SYS_SYSTEM_FONT)
font.SetPointSize(9)
vbox = wx.BoxSizer(wx.VERTICAL)
hbox1 = wx.BoxSizer(wx.HORIZONTAL)
st1 = wx.StaticText(panel, label='Class Name')
st1.SetFont(font)
hbox1.Add(st1, flag=wx.RIGHT, border=8)
tc = wx.TextCtrl(panel)
hbox1.Add(tc, proportion=1)
vbox.Add(hbox1, flag=wx.EXPAND|wx.LEFT|wx.RIGHT|wx.TOP, border=10)
vbox.Add((-1, 10))
hbox2 = wx.BoxSizer(wx.HORIZONTAL)
st2 = wx.StaticText(panel, label='Matching Classes')
st2.SetFont(font)
hbox2.Add(st2)
vbox.Add(hbox2, flag=wx.LEFT | wx.TOP, border=10)
vbox.Add((-1, 10))
hbox3 = wx.BoxSizer(wx.HORIZONTAL)
tc2 = wx.TextCtrl(panel, style=wx.TE_MULTILINE)
hbox3.Add(tc2, proportion=1, flag=wx.EXPAND)
vbox.Add(hbox3, proportion=1, flag=wx.LEFT|wx.RIGHT|wx.EXPAND,
border=10)
vbox.Add((-1, 25))
hbox4 = wx.BoxSizer(wx.HORIZONTAL)
cb1 = wx.CheckBox(panel, label='Case Sensitive')
cb1.SetFont(font)
hbox4.Add(cb1)
cb2 = wx.CheckBox(panel, label='Nested Classes')
cb2.SetFont(font)
hbox4.Add(cb2, flag=wx.LEFT, border=10)
cb3 = wx.CheckBox(panel, label='Non-Project classes')
cb3.SetFont(font)
hbox4.Add(cb3, flag=wx.LEFT, border=10)
vbox.Add(hbox4, flag=wx.LEFT, border=10)
vbox.Add((-1, 25))
hbox5 = wx.BoxSizer(wx.HORIZONTAL)
btn1 = wx.Button(panel, label='Ok', size=(70, 30))
hbox5.Add(btn1)
btn2 = wx.Button(panel, label='Close', size=(70, 30))
hbox5.Add(btn2, flag=wx.LEFT|wx.BOTTOM, border=5)
vbox.Add(hbox5, flag=wx.ALIGN_RIGHT|wx.RIGHT, border=10)
panel.SetSizer(vbox)
if __name__ == '__main__':
app = wx.App()
Example(None, title='Go To Class')
app.MainLoop()
</code></pre>
<p>The following are the errors I'm getting (I removed the path to the file):</p>
<pre><code>Traceback (most recent call last):
File "[path]", line 78, in <module>
Example(None, title='Go To Class')
File "[path]", line 14, in __init__
self.InitUI()
File "[path]", line 21, in InitUI
font = wx.SystemSettings_GetFont(wx.SYS_SYSTEM_FONT)
AttributeError: 'module' object has no attribute 'SystemSettings_GetFont'
</code></pre>
| 0 | 2016-10-18T13:24:55Z | 40,109,753 | <p>It's just a typo. The correct way is <code>wx.SystemSettings.GetFont()</code>, also see this: <a href="https://wxpython.org/Phoenix/docs/html/wx.SystemSettings.html#wx.SystemSettings.GetFont" rel="nofollow">https://wxpython.org/Phoenix/docs/html/wx.SystemSettings.html#wx.SystemSettings.GetFont</a></p>
<p>Change your InitUI function to this:</p>
<pre><code>def InitUI(self):
panel = wx.Panel(self)
font = wx.SystemSettings.GetFont(wx.SYS_SYSTEM_FONT)
font.SetPointSize(9)
...
</code></pre>
<p>Hope this helps!</p>
| 3 | 2016-10-18T13:33:11Z | [
"python",
"wxpython"
] |
Fastest way to Factor (Prime-1)/2 for 64-bit Prime? | 40,109,623 | <p>I'm trying to gather some statistics on prime numbers, among which is the distribution of factors for the number (prime-1)/2. I know there are general formulas for the size of factors of uniformly selected numbers, but I haven't seen anything about the distribution of factors of one less than a prime.</p>
<p>I've written a program to iterate through primes starting at the first prime after 2^63, and then factor the (prime - 1)/2 using trial division by all primes up to 2^32. However, this is extremely slow because that is a lot of primes (and a lot of memory) to iterate through. I store the primes as a single byte each (by storing the increment from one prime to the next). I also use a deterministic variant of the Miller-Rabin primality test for numbers up to 2^64, so I can easily detect when the remaining value (after a successful division) is prime.</p>
<p>I've experimented with a variant of Pollard rho and elliptic curve factorization, but it is hard to find the right balance between trial division and switching to these more complicated methods. Also I'm not sure I've implemented them correctly, because sometimes they seem to take a very long time to find a factor, and based on their asymptotic behavior, I'd expect them to be quite quick for such small numbers.</p>
<p>I have not found any information on factoring many numbers (vs just trying to factor one), but it seems like there should be some way to speed up the task by taking advantage of this.</p>
<p>Any suggestions, pointers to alternate approaches, or other guidance on this problem is greatly appreciated.</p>
<hr>
<p>Edit:
The way I store the primes is by storing an 8-bit offset to the next prime, with the implicit first prime being 3. Thus, in my algorithms, I have a separate check for division by 2, then I start a loop:</p>
<pre class="lang-py prettyprint-override"><code>factorCounts = collections.Counter()
while N % 2 == 0:
factorCounts[2] += 1
N //= 2
pp = 3
for gg in smallPrimeGaps:
if pp*pp > N:
break
if N % pp == 0:
while N % pp == 0:
factorCounts[pp] += 1
N //= pp
pp += gg
</code></pre>
<p>Also, I used a wheel sieve to calculate the primes for trial division, and I use an algorithm based on the remainder by several primes to get the next prime after the given starting point.</p>
<hr>
<p>I use the following for testing if a given number is prime (porting code to c++ now):</p>
<pre class="lang-cpp prettyprint-override"><code>bool IsPrime(uint64_t n)
{
if(n < 341531)
return MillerRabinMulti(n, {9345883071009581737ull});
else if(n < 1050535501)
return MillerRabinMulti(n, {336781006125ull, 9639812373923155ull});
else if(n < 350269456337)
return MillerRabinMulti(n, {4230279247111683200ull, 14694767155120705706ull, 1664113952636775035ull});
else if(n < 55245642489451)
return MillerRabinMulti(n, {2ull, 141889084524735ull, 1199124725622454117, 11096072698276303650});
else if(n < 7999252175582851)
return MillerRabinMulti(n, {2ull, 4130806001517ull, 149795463772692060ull, 186635894390467037ull, 3967304179347715805ull});
else if(n < 585226005592931977)
return MillerRabinMulti(n, {2ull, 123635709730000ull, 9233062284813009ull, 43835965440333360ull, 761179012939631437ull, 1263739024124850375ull});
else
return MillerRabinMulti(n, {2ull, 325ull, 9375ull, 28178ull, 450775ull, 9780504ull, 1795265022ull});
}
</code></pre>
| 1 | 2016-10-18T13:27:25Z | 40,110,783 | <p>This is how I store primes for later:
(I'm going to assume you want the factors of the number, and not just a primality test).</p>
<p>Copied from my website <a href="http://chemicaldevelopment.us/programming/2016/10/03/PGS.html" rel="nofollow">http://chemicaldevelopment.us/programming/2016/10/03/PGS.html</a></p>
<p>I'm going to assume you know the binary number system for this part. If not, just think of 1 as a "yes" and 0 as a "no".</p>
<p>So, there are plenty of algorithms to generate the first few primes. I use the Sieve of Eratosthenes to compute a list.</p>
<p>But, if we stored the primes as an array, like [2, 3, 5, 7] this would take up too much space. How much space exactly?</p>
<p>Well, 32 bit integers which can store up to 2^32 each take up 4 bytes because each byte is 8 bits, and 32 / 8 = 4</p>
<p>If we wanted to store each prime under 2,000,000,000, we would have to store over 98,000,000 integers. This takes up more space, and is slower at runtime than a bitset, which is explained below.</p>
<p>This approach will take 98,000,000 integers of space (each is 32 bits, which is 4 bytes), and when we check at runtime, we will need to check every integer in the array until we find it, or we find a number that is greater than it.</p>
<p>For example, say I give you a small list of primes: [2, 3, 5, 7, 11, 13, 17, 19]. I ask you if 15 is prime. How do you tell me?</p>
<p>Well, you would go through the list and compare each to 15.</p>
<p>Is 2 = 15?</p>
<p>Is 3 = 15?</p>
<p>. . .</p>
<p>Is 17 = 15?</p>
<p>At this point, you can stop because you have passed where 15 would be, so you know it isn't prime.</p>
<p>Now then, let's say we use a list of bits to tell you if the number is prime. The list above would look like:</p>
<p>001101010001010001010</p>
<p>This starts at 0, and goes to 19</p>
<p>The 1s mean that the index is prime, so count from the left: 0, 1, 2</p>
<p><strong>001</strong>101010001010001010</p>
<p>The last number in bold is 1, which indicates that 2 is prime.</p>
<p>In this case, if I asked you to check if 15 is prime, you don't need to go through all the numbers in the list; all you need to do is skip to 0 . . . 15, and check that single bit.</p>
<p>And for memory usage, the first approach uses 98000000 integers, whereas this one can store 32 numbers in a single integer (using the list of 1s and 0s), so we would need
2000000000/32=62500000 integers.</p>
<p>So it uses about 60% as much memory as the first approach, and is much faster to use.</p>
<p>We store the array of integers from the second approach in a file, then read it back when you run.</p>
<p>This uses 250MB of RAM to store primality data for all numbers below 2,000,000,000.</p>
<p>You can further reduce this with wheel sieving (like what you did storing (prime-1)/2)</p>
<p>I'll go a little bit more into wheel sieve.</p>
<p>You got it right by storing (prime - 1)/2, and 2 being a special case.</p>
<p>You can extend this to <strong>p#</strong> (the product of the first <strong>p</strong> primes)</p>
<p>For example, you use <strong>(1#)*k+1</strong> for numbers <strong>k</strong></p>
<p>You can also use the set of linear equations <strong>(n#)*k+L</strong>, where <strong>L</strong> is the set of primes less than <strong>n#</strong> and 1 excluding the first <strong>n</strong> primes.</p>
<p>So, you can also just store info for <strong>6*k+1</strong> and <strong>6*k+5</strong>, and even more than that, because <strong>L = {1, 2, 3, 5} \ {2, 3} = {1, 5}</strong></p>
<p>These methods should give you an understanding of some of the methods behind it.</p>
<p>You will need some way to implement this bitset, such as a list of 32 bit integers, or a string.</p>
<p>Look at: <a href="https://pypi.python.org/pypi/bitarray" rel="nofollow">https://pypi.python.org/pypi/bitarray</a> for a possible abstraction</p>
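<p>A minimal sketch of such a bitset sieve in plain Python, storing one bit per odd number in a <code>bytearray</code> (the small limit is just for illustration):</p>
<pre><code>def sieve_bits(limit):
    """Bit k of the returned bytearray says whether 2*k+1 is prime (numbers below limit)."""
    size = (limit // 2 + 7) // 8           # one bit per odd number
    bits = bytearray([0xFF]) * size        # start with every odd number marked prime
    bits[0] &= 0xFE                        # 1 is not prime
    for n in range(3, int(limit ** 0.5) + 1, 2):
        if bits[(n // 2) >> 3] & (1 << ((n // 2) & 7)):
            for m in range(n * n, limit, 2 * n):           # odd multiples of n
                bits[(m // 2) >> 3] &= ~(1 << ((m // 2) & 7)) & 0xFF
    return bits

def is_odd_prime(n, bits):                 # n must be odd and >= 3
    return bool(bits[(n // 2) >> 3] & (1 << ((n // 2) & 7)))

bits = sieve_bits(2000)
print([n for n in range(3, 100, 2) if is_odd_prime(n, bits)])
</code></pre>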
| 0 | 2016-10-18T14:20:51Z | [
"python",
"c++",
"prime-factoring",
"factoring"
] |
Fastest way to Factor (Prime-1)/2 for 64-bit Prime? | 40,109,623 | <p>I'm trying to gather some statistics on prime numbers, among which is the distribution of factors for the number (prime-1)/2. I know there are general formulas for the size of factors of uniformly selected numbers, but I haven't seen anything about the distribution of factors of one less than a prime.</p>
<p>I've written a program to iterate through primes starting at the first prime after 2^63, and then factor the (prime - 1)/2 using trial division by all primes up to 2^32. However, this is extremely slow because that is a lot of primes (and a lot of memory) to iterate through. I store the primes as a single byte each (by storing the increment from one prime to the next). I also use a deterministic variant of the Miller-Rabin primality test for numbers up to 2^64, so I can easily detect when the remaining value (after a successful division) is prime.</p>
<p>I've experimented with a variant of Pollard rho and elliptic curve factorization, but it is hard to find the right balance between trial division and switching to these more complicated methods. Also I'm not sure I've implemented them correctly, because sometimes they seem to take a very long time to find a factor, and based on their asymptotic behavior, I'd expect them to be quite quick for such small numbers.</p>
<p>I have not found any information on factoring many numbers (vs just trying to factor one), but it seems like there should be some way to speed up the task by taking advantage of this.</p>
<p>Any suggestions, pointers to alternate approaches, or other guidance on this problem is greatly appreciated.</p>
<hr>
<p>Edit:
The way I store the primes is by storing an 8-bit offset to the next prime, with the implicit first prime being 3. Thus, in my algorithms, I have a separate check for division by 2, then I start a loop:</p>
<pre class="lang-py prettyprint-override"><code>factorCounts = collections.Counter()
while N % 2 == 0:
factorCounts[2] += 1
N //= 2
pp = 3
for gg in smallPrimeGaps:
if pp*pp > N:
break
if N % pp == 0:
while N % pp == 0:
factorCounts[pp] += 1
N //= pp
pp += gg
</code></pre>
<p>Also, I used a wheel sieve to calculate the primes for trial division, and I use an algorithm based on the remainder by several primes to get the next prime after the given starting point.</p>
<hr>
<p>I use the following for testing if a given number is prime (porting code to c++ now):</p>
<pre class="lang-cpp prettyprint-override"><code>bool IsPrime(uint64_t n)
{
if(n < 341531)
return MillerRabinMulti(n, {9345883071009581737ull});
else if(n < 1050535501)
return MillerRabinMulti(n, {336781006125ull, 9639812373923155ull});
else if(n < 350269456337)
return MillerRabinMulti(n, {4230279247111683200ull, 14694767155120705706ull, 1664113952636775035ull});
else if(n < 55245642489451)
return MillerRabinMulti(n, {2ull, 141889084524735ull, 1199124725622454117, 11096072698276303650});
else if(n < 7999252175582851)
return MillerRabinMulti(n, {2ull, 4130806001517ull, 149795463772692060ull, 186635894390467037ull, 3967304179347715805ull});
else if(n < 585226005592931977)
return MillerRabinMulti(n, {2ull, 123635709730000ull, 9233062284813009ull, 43835965440333360ull, 761179012939631437ull, 1263739024124850375ull});
else
return MillerRabinMulti(n, {2ull, 325ull, 9375ull, 28178ull, 450775ull, 9780504ull, 1795265022ull});
}
</code></pre>
| 1 | 2016-10-18T13:27:25Z | 40,133,079 | <p>I don't have a definitive answer, but I do have some observations and some suggestions.</p>
<p>There are about 2*10^17 primes between 2^63 and 2^64, so any program you write is going to run for a while.</p>
<p>Let's talk about a primality test for numbers in the range 2^63 to 2^64. Any general-purpose test will do more work than you need, so you can speed things up by writing a special-purpose test. I suggest strong-pseudoprime tests (as in Miller-Rabin) to bases 2 and 3. If either of those tests shows the number is composite, you're done. Otherwise, look up the number (binary search) in a table of strong-pseudoprimes to bases 2 and 3 (ask Google to find those tables for you). Two strong pseudoprime tests followed by a table lookup will certainly be faster than the deterministic Miller-Rabin test you are currently performing, which probably uses six or seven bases.</p>
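<p>A minimal sketch of that strong-pseudoprime test, in Python just to show the logic (the same structure ports directly to C++ with a 128-bit intermediate for the modular multiplications):</p>
<pre><code>def is_sprp(n, a):
    """Strong probable-prime test of odd n > 2 to base a."""
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

def probably_prime_64(n):
    # Composite if either base says so; otherwise consult a table of
    # base-2/base-3 strong pseudoprimes (not included here) to be deterministic.
    return is_sprp(n, 2) and is_sprp(n, 3)
</code></pre>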
<p>For factoring, trial division to 1000 followed by Brent-Rho until the product of the known prime factors exceeds the cube root of the number being factored ought to be fairly fast, a few milliseconds. Then, if the remaining cofactor is composite, it will necessarily have only two factors, so SQUFOF would be a good algorithm to split them, faster than the other methods because all the arithmetic is done with numbers less than the square root of the number being factored, which in your case means the factorization could be done using 32-bit arithmetic instead of 64-bit arithmetic, so it ought to be fast.</p>
<p>Instead of factoring and primality tests, a better method uses a variant of the Sieve of Eratosthenes to factor large blocks of numbers. That will still be slow, as there are 203 million sieving primes less than 2^32, and you will need to deal with the bookkeeping of a segmented sieve, but considering that you factor lots of numbers at once, it's probably the best approach to your task.</p>
<p>I have code for everything mentioned above at <a href="http://programmingpraxis.com" rel="nofollow">my blog</a>.</p>
| 1 | 2016-10-19T13:41:18Z | [
"python",
"c++",
"prime-factoring",
"factoring"
] |
Scrapy - How to load html string into open_in_browser function | 40,109,782 | <p>I am working on some code which returns an <code>HTML</code> string (<code>my_html</code>). I want to see how this looks in a browser using <a href="https://doc.scrapy.org/en/latest/topics/debug.html#open-in-browser" rel="nofollow">https://doc.scrapy.org/en/latest/topics/debug.html#open-in-browser</a>. To do this I've tried to create a response object with body set to '<code>my_html</code>'. I've tried a bunch of things including:</p>
<pre><code>new_response = TextResponse(body=my_html)
open_in_browser(new_response)
</code></pre>
<p>based on the response class (<a href="https://doc.scrapy.org/en/latest/topics/request-response.html#response-objects" rel="nofollow">https://doc.scrapy.org/en/latest/topics/request-response.html#response-objects</a>). I'm getting:</p>
<pre><code>new_response = TextResponse(body=my_html)
File "c:\scrapy\http\response\text.py", line 27, in __init__
super(TextResponse, self).__init__(*args, **kwargs)
TypeError: __init__() takes at least 2 arguments (2 given)
</code></pre>
<p>How can I get this working?</p>
 | 0 | 2016-10-18T13:34:10Z | 40,110,049 | <p>Your error seems to be with the <code>TextResponse</code> initialization; <a href="https://doc.scrapy.org/en/latest/topics/request-response.html#textresponse-objects" rel="nofollow">according to the docs,</a> you need to initialize it with a URL, so <code>TextResponse("http://www.example.com")</code> should do it.</p>
<p>It looks like you are reading the <code>Response</code> object docs and trying to use <code>TextResponse</code> the way you would use <code>Response</code>, judging by your keyword argument and the link you posted.</p>
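<p>Roughly, a sketch of what that looks like; the URL is only a placeholder, and <code>HtmlResponse</code> (a <code>TextResponse</code> subclass) is used here since the body is HTML:</p>
<pre><code>from scrapy.http import HtmlResponse
from scrapy.utils.response import open_in_browser

new_response = HtmlResponse(url='http://example.com', body=my_html, encoding='utf-8')
open_in_browser(new_response)
</code></pre>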
| 1 | 2016-10-18T13:47:18Z | [
"python",
"scrapy"
] |
Scrapy - How to load html string into open_in_browser function | 40,109,782 | <p>I am working on some code which returns an <code>HTML</code> string (<code>my_html</code>). I want to see how this looks in a browser using <a href="https://doc.scrapy.org/en/latest/topics/debug.html#open-in-browser" rel="nofollow">https://doc.scrapy.org/en/latest/topics/debug.html#open-in-browser</a>. To do this I've tried to create a response object with body set to '<code>my_html</code>'. I've tried a bunch of things including:</p>
<pre><code>new_response = TextResponse(body=my_html)
open_in_browser(new_response)
</code></pre>
<p>based on the response class (<a href="https://doc.scrapy.org/en/latest/topics/request-response.html#response-objects" rel="nofollow">https://doc.scrapy.org/en/latest/topics/request-response.html#response-objects</a>). I'm getting:</p>
<pre><code>new_response = TextResponse(body=my_html)
File "c:\scrapy\http\response\text.py", line 27, in __init__
super(TextResponse, self).__init__(*args, **kwargs)
TypeError: __init__() takes at least 2 arguments (2 given)
</code></pre>
<p>How can I get this working?</p>
| 0 | 2016-10-18T13:34:10Z | 40,110,170 | <p><a href="https://docs.scrapy.org/en/latest/topics/request-response.html#scrapy.http.TextResponse" rel="nofollow"><code>TextResponse</code> expects a URL as first argument</a>:</p>
<pre><code>>>> scrapy.http.TextResponse('http://www.example.com')
<200 http://www.example.com>
>>>
</code></pre>
<p>If you want to pass a body, you still need a URL as first argument:</p>
<pre><code>>>> scrapy.http.TextResponse(body='<html><body>Oh yeah!</body></html>')
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/http/response/text.py", line 27, in __init__
super(TextResponse, self).__init__(*args, **kwargs)
TypeError: __init__() takes at least 2 arguments (2 given)
>>> scrapy.http.TextResponse('http://www.example.com', body='<html><body>Oh yeah!</body></html>')
<200 http://www.example.com>
</code></pre>
| 1 | 2016-10-18T13:53:03Z | [
"python",
"scrapy"
] |
How to use Adobe afm fonts in matplotlib text? | 40,109,850 | <p>I want to add a text to a figure using an AFM font. I know that I can pass the <code>fontproperties</code> or the <code>fontname</code> keyword argument when creating a text. </p>
<p>Regarding the usage of AFM fonts in matplotlib, I found <a href="http://matplotlib.org/api/afm_api.html" rel="nofollow">this</a> and <a href="http://matplotlib.org/api/font_manager_api.html#matplotlib.font_manager.afmFontProperty" rel="nofollow">this</a>. </p>
<p>I can't pass a Font instance created by <code>matplotlib.font_manager.afmFontProperty</code> as the <code>fontproperties</code> kwarg.</p>
<p>The font I intend to use is URW Chancery L and located in <code>/usr/share/fonts/type1/gsfonts/z003034l.afm</code>. How can I make matplotlib use this font?</p>
<p>Also I looked for converters from afm to ttf but could not find any, maybe you have a suggestion?</p>
<p>I'm using matplotlib 1.5.3 on Ubuntu 16.04.</p>
| 0 | 2016-10-18T13:37:48Z | 40,117,876 | <p>What is an "AFM font"? AFM files are <strong>A</strong>dobe <strong>F</strong>ont <strong>M</strong>etrics files, which only contain metadata around glyph bounds, kerning pairs, etc. as a convenient lookup mechanism when you don't want to mine the real font file for that information (handy for typesetting, where having the metrics available as separate resource makes things a hell of a lot faster), they are not themselves fonts in any way. You would still need a (now mostly defunct) <code>.pfa</code> or <code>.pfb</code> font file ("printer font; ascii" and "printer font; binary", respectively) to do any kind of actual text rendering. Without those, all you can do is mark the appropriate region in which text <em>would</em> be drawn if you also had the font itself available =)</p>
<p>(this is actually what TeX and PDF do - they use the font metrics to construct "empty boxes" inside of which text will ultimately be rendered once all the typesetting has been determined and the boxes no longer move around)</p>
| 0 | 2016-10-18T20:52:42Z | [
"python",
"matplotlib",
"fonts",
"adobe"
] |
Python numpy.fft changes strides | 40,109,915 | <p>Dear stackoverflow community!</p>
<p>Today I found that on a high-end cluster architecture, an elementwise multiplication of 2 cubes with dimensions 1921 x 512 x 512 takes ~ 27 s. This is much too long since I have to perform such computations at least 256 times for an azimuthal averaging of a power spectrum in the current implementation. I found that the slow performance was mainly due to different stride structures (C in one case and FORTRAN in the other). One of the two arrays was a newly generated boolean grid (C order) and the other one (FORTRAN order) came from the 3D <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftn.html#numpy.fft.fftn" rel="nofollow">numpy.fft.fftn()</a> Fourier transform of an input grid (C order). Any reasons why numpy.fft.fftn() changes the strides and ideas on how to prevent that except for reversing the axes (which would be just a workaround)? With similar strides (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.copy.html" rel="nofollow">ndarray.copy()</a> of the FT grid), ~ 4s are achievable, a tremendous improvement.</p>
<p>The question is therefore as following:</p>
<p>Consider the array:</p>
<pre><code>ran = np.random.rand(1921, 512, 512)
ran.strides
(2097152, 4096, 8)
a = np.fft.fftn(ran)
a.strides
(16, 30736, 15736832)
</code></pre>
<p>As we can see the stride structure is different. How can this be prevented (without using a = np.fft.fftn(ran, axes = (1,0)))? Are there any other numpy array routines that could affect stride structure? What can be done in those cases?</p>
<p>Helpful advice is as usual much appreciated!</p>
| 1 | 2016-10-18T13:40:54Z | 40,116,293 | <p>You could use scipy.fftpack.fftn (as suggested by hpaulj too) rather than numpy.fft.fftn, looks like it's doing what you want. It is however slightly less performing:</p>
<pre><code>import numpy as np
import scipy.fftpack
ran = np.random.rand(192, 51, 51) # not much memory on my laptop
a = np.fft.fftn(ran)
b = scipy.fftpack.fftn(ran)
ran.strides
(20808, 408, 8)
a.strides
(16, 3072, 156672)
b.strides
(41616, 816, 16)
timeit -n 100 np.fft.fftn(ran)
100 loops, best of 3: 37.3 ms per loop
timeit -n 100 scipy.fftpack.fftn(ran)
100 loops, best of 3: 41.3 ms per loop
</code></pre>
| 2 | 2016-10-18T19:16:52Z | [
"python",
"arrays",
"numpy",
"memory-management",
"fft"
] |
Python numpy.fft changes strides | 40,109,915 | <p>Dear stackoverflow community!</p>
<p>Today I found that on a high-end cluster architecture, an elementwise multiplication of 2 cubes with dimensions 1921 x 512 x 512 takes ~ 27 s. This is much too long since I have to perform such computations at least 256 times for an azimuthal averaging of a power spectrum in the current implementation. I found that the slow performance was mainly due to different stride structures (C in one case and FORTRAN in the other). One of the two arrays was a newly generated boolean grid (C order) and the other one (FORTRAN order) came from the 3D <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.fftn.html#numpy.fft.fftn" rel="nofollow">numpy.fft.fftn()</a> Fourier transform of an input grid (C order). Any reasons why numpy.fft.fftn() changes the strides and ideas on how to prevent that except for reversing the axes (which would be just a workaround)? With similar strides (<a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.copy.html" rel="nofollow">ndarray.copy()</a> of the FT grid), ~ 4s are achievable, a tremendous improvement.</p>
<p>The question is therefore as following:</p>
<p>Consider the array:</p>
<pre><code>ran = np.random.rand(1921, 512, 512)
ran.strides
(2097152, 4096, 8)
a = np.fft.fftn(ran)
a.strides
(16, 30736, 15736832)
</code></pre>
<p>As we can see the stride structure is different. How can this be prevented (without using a = np.fft.fftn(ran, axes = (1,0)))? Are there any other numpy array routines that could affect stride structure? What can be done in those cases?</p>
<p>Helpful advice is as usual much appreciated!</p>
| 1 | 2016-10-18T13:40:54Z | 40,142,164 | <blockquote>
<p>Any reasons why numpy.fft.fftn() changes the strides and ideas on how to prevent that except for reversing the axes (which would be just a workaround)?</p>
</blockquote>
<p>Computing the multidimensional DFT of an array consists in successively computing 1D DFTs over each dimension. There are two strategies:</p>
<ol>
<li>Restrict 1D DFT computations to contiguous 1D arrays. As the array is contiguous, problems related to latency/cache misses are reduced. This strategy has a major drawback: the array has to be transposed between each dimension. It is likely the strategy adopted by <code>numpy.fft</code>. At the end of the computation the array has been transposed; to avoid unnecessary work, the transposed array is returned and the strides are modified.</li>
<li>Enable 1D DFT computations on strided arrays. This might trigger some latency problems. It is the strategy of <code>fftw</code>, available through the <code>pyfftw</code> interface. As a result, the output array features the same strides as the input array.</li>
</ol>
<p>Timing <code>numpy.fftn</code> and <code>pyfftw.numpy.fftn</code> as performed <a href="http://stackoverflow.com/questions/40061307/comparatively-slow-python-numpy-3d-fourier-transformation/40064319?noredirect=1#comment67488042_40064319">here</a> and <a href="http://stackoverflow.com/questions/6365623/improving-fft-performance-in-python">there</a> or <a href="https://gist.github.com/fnielsen/99b981b9da34ae3d5035" rel="nofollow">there</a> will tell you whether FFTW is really the Fastest Fourier Transform in the West or not...</p>
<ul>
<li><p>To check that numpy uses the first strategy, take a look at <code>numpy/fft/fftpack.py</code>. At line 81-85, the call to <code>work_function(a, wsave)</code> (i.e. <code>fftpack.cfftf</code>, from <a href="http://www.netlib.org/fftpack/" rel="nofollow">FFTPACK</a>, arguments documented <a href="https://docs.oracle.com/cd/E19422-01/819-3691/cfftf.html" rel="nofollow">there</a>) is enclosed between calls to <code>numpy.swapaxes()</code> performing the transpositions.</p></li>
<li><p><code>scipy.fftpack.fftn</code> does not seem to change the strides... Nevertheless, it seems that it makes use of the first strategy. <a href="https://github.com/scipy/scipy/blob/v0.18.1/scipy/fftpack/basic.py" rel="nofollow"><code>scipy.fftpack.fftn()</code></a> calls <a href="https://github.com/scipy/scipy/blob/master/scipy/fftpack/src/zfftnd.c" rel="nofollow"><code>scipy.fftpack.zfftnd()</code></a> which calls <a href="https://github.com/scipy/scipy/blob/master/scipy/fftpack/src/zfft.c" rel="nofollow"><code>zfft()</code></a>, based on <a href="https://github.com/scipy/scipy/blob/master/scipy/fftpack/src/dfftpack/zfftf1.f" rel="nofollow"><code>zfftf1</code></a> which does not seem to handle strided DFTs. Moreover, <code>zfftnd()</code> calls many times the function <a href="https://github.com/scipy/scipy/blob/master/scipy/fftpack/src/zfftnd.c" rel="nofollow"><code>flatten()</code></a> which performs the transposition.</p></li>
<li><p>Another example: for parallel distributed-memory multidimensional DFTs, <a href="http://www.fftw.org/fftw3_doc/Transposed-distributions.html#Transposed-distributions" rel="nofollow">FFTW-MPI uses the first strategy</a> to avoid any MPI communications between processes during the 1D DFTs. Of course, <a href="http://www.fftw.org/fftw3_doc/FFTW-MPI-Transposes.html#FFTW-MPI-Transposes" rel="nofollow">functions to transpose the array</a> are not far away and a lot of MPI communications are involved in the process.</p></li>
</ul>
<blockquote>
<p>Are there any other numpy array routines that could affect stride structure? What can be done in those cases?</p>
</blockquote>
<p>You can <a href="https://github.com/numpy/numpy/search?p=1&q=swapaxes&utf8=%E2%9C%93" rel="nofollow">search the github repository of numpy for <code>swapaxes</code></a>: this funtion is only used a couple of times. Hence, to my mind, this "change of strides" is particular to <code>fft.fftn()</code> and most numpy functions keep the strides unchanged.</p>
<p>Finally, the "change of strides" is a feature of the first strategy and there is no way to prevent that. The only workaround is to swap the axes at the end of the computation, which is costly. But you can rely on <code>pyfftw</code> since <code>fftw</code> implements the second strategy in a very efficient way. The DFT computations will be faster, and subsequent computations will be faster as well if the strides of the different arrays become consistent.</p>
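<p>For the original use case (repeated elementwise multiplications against C-ordered boolean grids), a minimal sketch of the copy-once workaround, so the cost of restoring C order is paid a single time instead of on every multiplication:</p>
<pre><code>import numpy as np

ran = np.random.rand(192, 64, 64)        # small stand-in for the 1921 x 512 x 512 cube
ft = np.fft.fftn(ran)                    # comes back with reversed (Fortran-like) strides
ft = np.ascontiguousarray(ft)            # one-off copy back to C order

mask = np.zeros(ft.shape, dtype=bool)    # freshly created grids are C-ordered
mask[::2] = True
product = ft * mask                      # both operands now have matching strides
</code></pre>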
| 1 | 2016-10-19T21:56:14Z | [
"python",
"arrays",
"numpy",
"memory-management",
"fft"
] |
Anaconda OpenCV Arch Linux libselinux.so error | 40,110,207 | <p>I have installed Anaconda 64 bit on a relatively fresh install of Arch.</p>
<p>I followed the instructions <a href="https://rivercitylabs.org/up-and-running-with-opencv3-and-python-3-anaconda-edition/" rel="nofollow">here</a> to set up a virtual environment for opencv:</p>
<pre><code>conda create -n opencv numpy scipy scikit-learn matplotlib python=3
source activate opencv
conda install -c https://conda.binstar.org/menpo opencv3
</code></pre>
<p>When I run "import cv2" on the activated virtual environment I get:</p>
<pre><code>ImportError: libselinux.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>I have no clue how to fix this - do I need to make kernel changes?
Thanks for any help.</p>
 | 0 | 2016-10-18T13:54:19Z | 40,112,167 | <p>Disable SELinux, then run the command:</p>
<pre><code>yum reinstall glibc*
</code></pre>
| 0 | 2016-10-18T15:23:45Z | [
"python",
"linux",
"opencv",
"anaconda"
] |
Anaconda OpenCV Arch Linux libselinux.so error | 40,110,207 | <p>I have installed Anaconda 64 bit on a relatively fresh install of Arch.</p>
<p>I followed the instructions <a href="https://rivercitylabs.org/up-and-running-with-opencv3-and-python-3-anaconda-edition/" rel="nofollow">here</a> to set up a virtual environment for opencv:</p>
<pre><code>conda create -n opencv numpy scipy scikit-learn matplotlib python=3
source activate opencv
conda install -c https://conda.binstar.org/menpo opencv3
</code></pre>
<p>When I run "import cv2" on the activated virtual environment I get:</p>
<pre><code>ImportError: libselinux.so.1: cannot open shared object file: No such file or directory
</code></pre>
<p>I have no clue how to fix this - do I need to make kernel changes?
Thanks for any help.</p>
 | 0 | 2016-10-18T13:54:19Z | 40,127,651 | <p>Fixed by installing the libselinux package from the AUR. I now have:</p>
<pre><code>ImportError: /usr/lib/libpangoft2-1.0.so.0: undefined symbol: FcWeightToOpenType
</code></pre>
<p>I will post again if I solve it.</p>
<p>EDIT:
Solved as in issue <a href="https://github.com/ContinuumIO/anaconda-issues/issues/368" rel="nofollow">368</a></p>
<pre><code>conda install -c asmeurer pango
</code></pre>
| 0 | 2016-10-19T09:39:56Z | [
"python",
"linux",
"opencv",
"anaconda"
] |
Numerical keyboard Python | 40,110,222 | <p>I am recording responses during a simple calculation task in Python, and I am storing these in a string. I would like to use the numerical part of the keyboard, but these give for instance 'num_1' instead of '1'. It probably has something to do with the fact that I store the input as a Text Stimulus in PsychoPy. Any way to get around this?</p>
<pre><code>CapturedResponseString = visual.TextStim(myWin,
units='norm',height = 0.2,
pos=(0,-0.40), text='',
alignHoriz = 'center',alignVert='center', color=[-1,-1,-1])
captured_string = '' #key presses will be captured in this string
</code></pre>
| 0 | 2016-10-18T13:54:57Z | 40,111,374 | <p>If all your responses are preceded by "num_" you can just amputate them. For example <code>int(CapturedResponseString[4:])</code> will grab the numerical portion and turn it into an integer. </p>
<p>Python has lots of string processing tools that are much more sophisticated than this, and they are all available to you when using Psychopy. For example you could also split at the underscore. <code>CapturedResponseString.split('_')</code> will return a list with the stuff before the underscore in the first position and the rest in the second (assuming only one underscore).</p>
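<p>For example, a sketch of how that could look in the response loop, assuming the key names come from <code>event.getKeys()</code>:</p>
<pre><code>from psychopy import event

for key in event.getKeys():
    if key.startswith('num_'):
        key = key[len('num_'):]            # 'num_1' becomes '1'
    if key.isdigit():
        captured_string += key
        CapturedResponseString.setText(captured_string)
</code></pre>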
| 2 | 2016-10-18T14:46:23Z | [
"python",
"psychopy"
] |
Python, scipy, curve_fit, bounds: How can I constrain a param by two intervals? | 40,110,260 | <p>I'm using scipy.optimize.curve_fit for fitting a sigmoidal curve to data. I need to bound one of the parameters to [-3, 0.5] and [0.5, 3.0].</p>
<p>I tried fitting the curve without bounds first; then, if the parameter is lower than zero, I fit once more with bounds [-3, 0.5], and otherwise with [0.5, 3.0].</p>
<p>Is it possible to bound a parameter in curve_fit to two intervals?</p>
| 0 | 2016-10-18T13:56:33Z | 40,116,907 | <p>No, least_squares (hence curve_fit) only supports box constraints.</p>
| 0 | 2016-10-18T19:52:47Z | [
"python",
"scipy",
"curve-fitting"
] |
Python, scipy, curve_fit, bounds: How can I constrain a param by two intervals? | 40,110,260 | <p>I'm using scipy.optimize.curve_fit for fitting a sigmoidal curve to data. I need to bound one of the parameters to [-3, 0.5] and [0.5, 3.0].</p>
<p>I tried fitting the curve without bounds first; then, if the parameter is lower than zero, I fit once more with bounds [-3, 0.5], and otherwise with [0.5, 3.0].</p>
<p>Is it possible to bound a parameter in curve_fit to two intervals?</p>
| 0 | 2016-10-18T13:56:33Z | 40,130,121 | <p>There is a crude way to do this, and that is to have your function return very large values if the parameter is outside the multiple bounds. For example:</p>
<pre><code>def sigmoid_func(x, *parameters):
    if parameters_outside_multiple_bounds(*parameters):  # placeholder check against the two intervals
        return 1.0E10 * len(x)                           # very large number -> huge error
    else:
        return sigmoid_value(x, *parameters)             # the normal sigmoid value
</code></pre>
<p>This has the effect of yielding very large errors if the parameter is outside of your multiple bounds. If you have a single bound range of [lower, upper] you should not use this method, since the most recent version of scipy already supports the more common single-bound-range type of problem.</p>
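<p>A runnable sketch of this approach; the sigmoid parameterisation is an assumption, and the two intervals for <code>b</code> are taken from the question:</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit

def sigmoid_penalised(x, a, b, c):
    # b is the parameter restricted to the two intervals from the question
    if not (-3.0 <= b <= 0.5 or 0.5 <= b <= 3.0):
        return 1.0E10 * np.ones_like(x)    # huge residuals outside the allowed intervals
    return a / (1.0 + np.exp(-b * (x - c)))

xdata = np.linspace(-5, 5, 50)
ydata = 2.0 / (1.0 + np.exp(-1.5 * xdata)) + np.random.normal(0, 0.05, xdata.size)
popt, pcov = curve_fit(sigmoid_penalised, xdata, ydata, p0=[1.0, 1.0, 0.0])
</code></pre>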
| 0 | 2016-10-19T11:26:30Z | [
"python",
"scipy",
"curve-fitting"
] |
Reading a binary file with memoryview | 40,110,306 | <p>I read a large file in the code below which has a special structure - among others two blocks that need be processed at the same time. Instead of seeking back and forth in the file I load these two blocks wrapped in <code>memoryview</code> calls</p>
<pre><code>with open(abs_path, 'rb') as bsa_file:
# ...
# load the file record block to parse later
file_records_block = memoryview(bsa_file.read(file_records_block_size))
# load the file names block
file_names_block = memoryview(bsa_file.read(total_file_name_length))
# close the file
file_records_index = names_record_index = 0
for folder_record in folder_records:
name_size = struct.unpack_from('B', file_records_block, file_records_index)[0]
# discard null terminator below
folder_path = struct.unpack_from('%ds' % (name_size - 1),
file_records_block, file_records_index + 1)[0]
file_records_index += name_size + 1
for __ in xrange(folder_record.files_count):
file_name_len = 0
for b in file_names_block[names_record_index:]:
if b != '\x00': file_name_len += 1
else: break
file_name = unicode(struct.unpack_from('%ds' % file_name_len,
file_names_block,names_record_index)[0])
names_record_index += file_name_len + 1
</code></pre>
<p>The file is correctly parsed, but as it's my first use of the memoryview interface I am not sure I am doing it right. The file_names_block is, as can be seen, composed of null-terminated C strings.</p>
<ol>
<li>Is my trick <code>file_names_block[names_record_index:]</code> using the memoryview magic or do I create some n^2 slices ? Would I need to use <code>islice</code> here ?</li>
<li>As seen I just look for the null byte manually and then proceed to <code>unpack_from</code>. But I read in <a href="http://stackoverflow.com/a/20024532/281545">How to split a byte string into separate bytes in python</a> that I can use <code>cast()</code> (docs ?) on the memory view - any way to use that (or another trick) to split the view in bytes ? Could I just call <code>split('\x00')</code> ? Would this preserve the memory efficiency ?</li>
</ol>
<p>I would appreciate insight on the one right way to do this (in python 2).</p>
| 0 | 2016-10-18T13:59:28Z | 40,113,126 | <p>A <code>memoryview</code> is not going to give you any advantages when it comes to null-terminated strings as they have no facilities for anything but fixed-width data. You may as well use <code>bytes.split()</code> here instead:</p>
<pre><code>file_names_block = bsa_file.read(total_file_name_length)
file_names = file_names_block.split(b'\00')
</code></pre>
<p>Slicing a <code>memoryview</code> doesn't use extra memory (other than the view parameters), but if using a cast you do produce new native objects for the parsed memory region the moment you try to access elements in the sequence.</p>
<p>You can still use the <code>memoryview</code> for the <code>file_records_block</code> parsing; those strings are prefixed by a length giving you the opportunity to use slicing. Just keep slicing bytes of the memory view as you process <code>folder_path</code> values, there's no need to keep an index:</p>
<pre><code>for folder_record in folder_records:
name_size = file_records_block[0] # first byte is the length, indexing gives the integer
folder_path = file_records_block[1:name_size].tobytes()
file_records_block = file_records_block[name_size + 1:] # skip the null
</code></pre>
<p>Because the <code>memoryview</code> was sourced from a <code>bytes</code> object, indexing will give you the integer value for a byte, <code>.tobytes()</code> on a given slice gives you a new <code>bytes</code> string for that section, and you can then continue to slice to leave the remainder for the next loop.</p>
| 1 | 2016-10-18T16:08:41Z | [
"python",
"python-2.7",
"binaryfiles",
"memoryview"
] |
Django queryset with isnull=True in get_object_or_404 | 40,110,309 | <p>I have 2 records in the posts table, one of the row in table has <strong>rating as NULL</strong> and the other has <strong>rating as 2</strong>, both have same user_id say 5</p>
<p>I implement this first</p>
<p><strong>views.py</strong></p>
<pre><code>class Rating(TemplateView):
template_name = 'base/rating.html'
def get(self,request,slug,*args,**kwargs):
user_id = request.user.id
post = get_object_or_404(Post.objects.filter(user_id=user_id,rating__isnull=True))
return render(request,self.template_name)
</code></pre>
<p><strong>urls.py</strong></p>
<pre><code>url(r'^post/addRating/(?P<slug>.+?)/$',views.Rating.as_view(),name="post_rating"),
</code></pre>
<blockquote>
<p>So the actual concept is not to render the view if rating column is
not null</p>
</blockquote>
<p>So the first record with rating null should return 404 page but it is not and the second record display properly</p>
<p>Can any one help me to fix it?</p>
 | 0 | 2016-10-18T13:59:34Z | 40,110,557 | <p>You need to read the Django docs more carefully; the above code is incorrect.
To use get_object_or_404, you have to write something like this (from the Django <a href="https://docs.djangoproject.com/en/1.10/topics/http/shortcuts/#get-object-or-404" rel="nofollow">docs</a>):</p>
<pre><code>from django.shortcuts import get_object_or_404
def my_view(request):
my_object = get_object_or_404(MyModel, pk=1)
</code></pre>
<p>For your purpose, you should write something like:</p>
<pre><code>posts = Post.objects.filter(user_id=user_id,rating__isnull=True)
post = posts and posts[0]
</code></pre>
| -1 | 2016-10-18T14:11:04Z | [
"python",
"django"
] |
How this Python generators based inorder traversal method works | 40,110,401 | <p>I am quite new to python and still exploring. Came across generators and below code snippet implementing inorder binary tree traversal using generators:</p>
<pre><code>def inorder(t):
if t:
for x in inorder(t.left):
yield x
yield t.label
for x in inorder(t.right):
yield x
</code></pre>
<p>Now I know the following fact about generators: they remember their local variable values across calls. However, this function is recursive. So how does it remember local variable values across these different recursive calls?</p>
<p>Also, it was easy to understand the normal recursive inorder program (not involving generators), as there were clear recursion termination conditions explicitly specified. But how does this recursion with generators work?</p>
| -1 | 2016-10-18T14:03:50Z | 40,110,463 | <p><code>inorder</code> returns a generator. <em>That</em> object is what remembers its local state between calls to <code>next</code>. There is no overlap between generators created by separate calls to <code>inorder</code>, even when called recursively.</p>
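<p>A small sketch of that independence, with a simpler generator standing in for <code>inorder</code>:</p>
<pre><code>def countdown(n):
    while n:
        yield n
        n -= 1

g1 = countdown(3)
g2 = countdown(3)            # a second, completely independent generator object
print(next(g1), next(g1))    # 3 2
print(next(g2))              # 3 -- g2 keeps its own n, unaffected by g1
</code></pre>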
| 2 | 2016-10-18T14:06:41Z | [
"python"
] |
How this Python generators based inorder traversal method works | 40,110,401 | <p>I am quite new to python and still exploring. Came across generators and below code snippet implementing inorder binary tree traversal using generators:</p>
<pre><code>def inorder(t):
if t:
for x in inorder(t.left):
yield x
yield t.label
for x in inorder(t.right):
yield x
</code></pre>
<p>Now I know the following fact about generators: they remember their local variable values across calls. However, this function is recursive. So how does it remember local variable values across these different recursive calls?</p>
<p>Also, it was easy to understand the normal recursive inorder program (not involving generators), as there were clear recursion termination conditions explicitly specified. But how does this recursion with generators work?</p>
| -1 | 2016-10-18T14:03:50Z | 40,110,886 | <p>I modified the code somewhat to get the idea of the flow of the execution sequence. Basically I added some <code>print()</code> statements.</p>
<pre><code>class BinaryTreeNode():
def __init__(self, pLeft, pRight, pValue):
self.left = pLeft
self.right = pRight
self.label = pValue
def inorder(t):
print("at the beginning of inorder(t): " + (str(t.label) if t else "None" ))
if t:
for x in inorder(t.left):
print("inside inorder(t.left):" + str(t.label)) #delete
yield x
print("inside inorder(t):" + str(t.label)) #delete
yield t.label
for x in inorder(t.right):
print("inside inorder(t.right):" + str(t.label)) #delete
yield x
node1 = BinaryTreeNode(None,None,1)
node3 = BinaryTreeNode(None,None,3)
node2 = BinaryTreeNode(node1,node3,2)
node5 = BinaryTreeNode(None,None,5)
node4 = BinaryTreeNode(node2,node5,4)
root = node4
for i in inorder(root):
print(i)
</code></pre>
<p>The output is:</p>
<pre><code>1 at the beginning of inorder(t): 4
2 at the beginning of inorder(t): 2
3 at the beginning of inorder(t): 1
4 at the beginning of inorder(t): None
5 inside inorder(t):1
6 inside inorder(t.left):2
7 inside inorder(t.left):4
8 1
9 at the beginning of inorder(t): None
10 inside inorder(t):2
11 inside inorder(t.left):4
12 2
13 at the beginning of inorder(t): 3
14 at the beginning of inorder(t): None
15 inside inorder(t):3
16 inside inorder(t.right):2
17 inside inorder(t.left):4
18 3
19 at the beginning of inorder(t): None
20 inside inorder(t):4
21 4
22 at the beginning of inorder(t): 5
23 at the beginning of inorder(t): None
24 inside inorder(t):5
25 inside inorder(t.right):4
26 5
27 at the beginning of inorder(t): None
</code></pre>
<p>Notice that the second call to <code>inorder(node4)</code> didn't print <code>at the beginning of inorder(t): 4</code> but printed <code>at the beginning of inorder(t): None</code> (line 9 in the output). That means a generator also remembers the last executed line (mostly because it remembers the program counter value from the last call).</p>
<p>Also, every for loop obtains its own generator instance from the function <code>inorder()</code>. This generator is specific to that for loop, and hence its local scope is maintained separately.</p>
<p>Above traverses this tree:</p>
<pre><code> 4
/ \
2 5
/ \
1 3
</code></pre>
<p>Also, termination occurs when each of the recursive calls reaches its end. This results in the following recursive call tree:</p>
<pre><code>==>inorder(<4>)
|---> x in inorder(left<2>)
|---> x in inorder(left<1>)
|---> x in inorder(left<None>) --> terminate
yield 1 (note the indention, it is not yield inside first for-in loop but after it)
yield 1 (note the indentation, this is yield inside first for-in loop)
yield 1
inorder(<4>)
|---> x in inorder(left<2>)
|---> x in inorder(left<1>)
==============================>|---> x in inorder(right<None>) --> terminate
yield 2
yield 2
inorder(<4>)
|---> x in inorder(left<2>)
================>|---> x in inorder(right<3>)
|---> x in inorder(left<None>) --> terminate
yield 3
yield 3
yield 3
inorder(<4>)
|---> x in inorder(left<2>)
|---> x in inorder(left<1>)
=============================>|---> x in inorder(right<None>) --> terminate
terminate
terminate
yield 4
inorder(4)
==>|---> x in inorder(right<5>)
|---> x in inorder(left<None>) --> terminate
yield 5
yield 5
inorder(4)
|---> x in inorder(right<5>)
===============>|---> x in inorder(right<None>) --> terminate
terminate
terminate
terminate
</code></pre>
<p>(explanation:</p>
<ul>
<li><code><i></code> means call with <code>nodei</code> as parameter</li>
<li><code>left<i></code> represents <code>inorder(t.left)</code> call inside first <code>for-in</code> loop where <code>t.left</code> is <code>nodei</code></li>
<li><code>right<i></code> represents <code>inorder(t.right)</code> call inside second <code>for-in</code> loop where <code>t.right</code> is <code>nodei</code></li>
<li><code>===></code> shows where execution begins in that particular call)</li>
</ul>
| 1 | 2016-10-18T14:25:20Z | [
"python"
] |
How to check if a string contains a dictionary | 40,110,468 | <p>I want to recursively parse all values in a dict that are strings with <code>ast.literal_eval(value)</code> but not do that eval if the string doesn't contain a dict. I want this, because I have a string in a dict that is a dict in itself and I would like the value to be a dict. Best to give an example</p>
<pre><code>my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another string'}"}
</code></pre>
<p>Now I don't want to do <code>ast.literal_eval(my_dict['c'])</code>; I want a generic solution where I can do <code>convert_to_dict(my_dict)</code>.</p>
<p>I wanted to write my own method, but I don't know how to check if a string contains a dict, and then ast.literal_eval will fail, hence the question.</p>
| 4 | 2016-10-18T14:06:51Z | 40,110,943 | <p>If you need to handle nested <code>str</code> defining <code>dict</code>, <a href="https://docs.python.org/3/library/json.html#json.loads" rel="nofollow"><code>json.loads</code> with an <code>object_hook</code></a> might work for you:</p>
<pre><code>import json
def convert_subdicts(d):
for k, v in d.items():
try:
# Try to decode a dict
newv = json.loads(v, object_hook=convert_subdicts)
except Exception:
continue
else:
if isinstance(newv, dict):
d[k] = newv # Replace with decoded dict
return d
origdict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another string'}"}
newdict = convert_subdicts(origdict.copy()) # Omit .copy() if mutating origdict okay
</code></pre>
<p>That should recursively handle the case where the contained <code>dict</code>s might contain <code>str</code>s values that define subdicts. If you don't need to handle that case, you can omit the use of the <code>object_hook</code>, or replace <code>json.loads</code> entirely with <code>ast.literal_eval</code>.</p>
| 0 | 2016-10-18T14:27:44Z | [
"python",
"string",
"python-3.x",
"dictionary",
"recursion"
] |
How to check if a string contains a dictionary | 40,110,468 | <p>I want to recursively parse all values in a dict that are strings with <code>ast.literal_eval(value)</code> but not do that eval if the string doesn't contain a dict. I want this, because I have a string in a dict that is a dict in itself and I would like the value to be a dict. Best to give an example</p>
<pre><code>my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another string'}"}
</code></pre>
<p>Now I don't want to do <code>ast.literal_eval(my_dict['c'])</code>; I want a generic solution where I can do <code>convert_to_dict(my_dict)</code>.</p>
<p>I wanted to write my own method, but I don't know how to check if a string contains a dict, and then ast.literal_eval will fail, hence the question.</p>
| 4 | 2016-10-18T14:06:51Z | 40,110,967 | <p>The general idea referenced in my above comment is to run thru the dictionary and try and evaluate. Store that in a local variable, and then check if that evaluated expression is a dictionary. If so, then reassign it to the passed input. If not, leave it alone. </p>
<pre><code>my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another string'}"}
def convert_to_dict(d):
for key, val in d.items():
try:
check = ast.literal_eval(val)
except:
continue
if isinstance(check, dict):
d[key] = check
return d
convert_to_dict(my_dict)
</code></pre>
| 1 | 2016-10-18T14:28:40Z | [
"python",
"string",
"python-3.x",
"dictionary",
"recursion"
] |
How to check if a string contains a dictionary | 40,110,468 | <p>I want to recursively parse all values in a dict that are strings with <code>ast.literal_eval(value)</code> but not do that eval if the string doesn't contain a dict. I want this, because I have a string in a dict that is a dict in itself and I would like the value to be a dict. Best to give an example</p>
<pre><code>my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another string'}"}
</code></pre>
<p>Now I don't want to do <code>ast.literal_eval(my_dict['c'])</code>; I want a generic solution where I can do <code>convert_to_dict(my_dict)</code>.</p>
<p>I wanted to write my own method, but I don't know how to check if a string contains a dict, and then ast.literal_eval will fail, hence the question.</p>
| 4 | 2016-10-18T14:06:51Z | 40,111,122 | <p>You can check if you have a dict after using <em>literal_eval</em> and reassign:</p>
<pre><code>from ast import literal_eval
def reassign(d):
for k, v in d.items():
try:
evald = literal_eval(v)
if isinstance(evald, dict):
d[k] = evald
except ValueError:
pass
</code></pre>
<p>Just pass in the dict:</p>
<pre><code>In [2]: my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another stri
...: ng'}"}
In [3]: reassign(my_dict)
In [4]: my_dict
Out[4]: {'a': 42, 'b': 'my_string', 'c': {'d': 33, 'e': 'another string'}}
In [5]: my_dict = {'a': '42', 'b': "my_string", '5': "{'d': 33, 'e': 'another st
...: ring', 'other_dict':{'foo':'bar'}}"}
In [6]: reassign(my_dict)
In [7]: my_dict
Out[7]:
{'5': {'d': 33, 'e': 'another string', 'other_dict': {'foo': 'bar'}},
'a': '42',
'b': 'my_string'}
</code></pre>
<p>You should also be aware that if you had certain other objects in the dict like <em>datetime</em> objects etc.. then literal_eval would fail so it really depends on what your dict can contain as to whether it will work or not.</p>
<p>If you need a recursive approach, all you need is to call reassign on the new dict.</p>
<pre><code>def reassign(d):
for k, v in d.items():
try:
evald = literal_eval(v)
if isinstance(evald, dict):
d[k] = evald
reassign(evald)
except ValueError:
pass
</code></pre>
<p>And again just pass the dict:</p>
<pre><code>In [10]: my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': \"{'f' : 64}
...: \"}"}
In [11]: reassign(my_dict)
In [12]: my_dict
Out[12]: {'a': 42, 'b': 'my_string', 'c': {'d': 33, 'e': {'f': 64}}}
</code></pre>
<p>And if you want a new dict:</p>
<pre><code>from ast import literal_eval
from copy import deepcopy
def reassign(d):
for k, v in d.items():
try:
evald = literal_eval(v)
if isinstance(evald, dict):
yield k, dict(reassign(evald))
except ValueError:
yield k, deepcopy(v)
</code></pre>
<p>Which will give you a new dict:</p>
<pre><code>In [17]: my_dict = {'a': [1, 2, [3]], 'b': "my_string", 'c': "{'d': 33, 'e': \"{
...: 'f' : 64}\"}"}
In [18]: new = dict(reassign(my_dict))
In [19]: my_dict["a"][-1].append(4)
In [20]: new
Out[20]: {'a': [1, 2, [3]], 'b': 'my_string', 'c': {'d': 33, 'e': {'f': 64}}}
In [21]: my_dict
Out[21]:
{'a': [1, 2, [3, 4]],
'b': 'my_string',
'c': '{\'d\': 33, \'e\': "{\'f\' : 64}"}'}
</code></pre>
<p>You need to make sure to <em>deepcopy</em> objects or you won't get a true independent copy of the dict when you have nested object like the list of lists above.</p>
| 1 | 2016-10-18T14:35:06Z | [
"python",
"string",
"python-3.x",
"dictionary",
"recursion"
] |
How to check if a string contains a dictionary | 40,110,468 | <p>I want to recursively parse all values in a dict that are strings with <code>ast.literal_eval(value)</code> but not do that eval if the string doesn't contain a dict. I want this, because I have a string in a dict that is a dict in itself and I would like the value to be a dict. Best to give an example</p>
<pre><code>my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': 'another string'}"}
</code></pre>
<p>Now I don't want to do <code>ast.literal_eval(my_dict['c'])</code>; I want a generic solution where I can do <code>convert_to_dict(my_dict)</code></p>
<p>I wanted to write my own method, but I don't know how to check whether a string contains a dict, and if it doesn't, <code>ast.literal_eval</code> will fail; hence the question.</p>
| 4 | 2016-10-18T14:06:51Z | 40,111,199 | <p>Here is a proposition that handles recursion. As suggested in the comments, it tries to eval everything and then checks whether the result is a dict; if it is, we recurse, otherwise we skip the value. I slightly altered the initial dict to show that it handles recursion fine:</p>
<pre><code>import ast
my_dict = {'a': 42, 'b': "my_string", 'c': "{'d': 33, 'e': \"{'f' : 64}\"}"}
def recursive_dict_eval(old_dict):
new_dict = old_dict.copy()
for key,value in old_dict.items():
try:
evaled_value=ast.literal_eval(value)
assert isinstance(evaled_value,dict)
new_dict[key]=recursive_dict_eval(evaled_value)
except (SyntaxError, ValueError, AssertionError):
#SyntaxError, ValueError are for the literal_eval exceptions
pass
return new_dict
print(my_dict)
print(recursive_dict_eval(my_dict))
</code></pre>
<p>Output:</p>
<pre><code>{'a': 42, 'b': 'my_string', 'c': '{\'d\': 33, \'e\': "{\'f\' : 64}"}'}
{'a': 42, 'b': 'my_string', 'c': {'e': {'f': 64}, 'd': 33}}
</code></pre>
| 1 | 2016-10-18T14:38:44Z | [
"python",
"string",
"python-3.x",
"dictionary",
"recursion"
] |
Jupyter magic to handle notebook exceptions | 40,110,540 | <p>I have a few long-running experiments in my Jupyter Notebooks. Because I don't know when they will finish, I add an email function to the last cell of the notebook, so I automatically get an email, when the notebook is done.</p>
<p>But when there is a random exception in one of the cells, the whole notebook stops executing and I never get any email. <strong>So I'm wondering if there is some magic function that could execute a function in case of an exception / kernel stop.</strong></p>
<p>Like</p>
<pre><code>def handle_exception(stacktrace):
send_mail_to_myself(stacktrace)
%%in_case_of_notebook_exception handle_exception # <--- this is what I'm looking for
</code></pre>
<p>The other option would be to encapsulate every cell in try-catch, right? But that's soooo tedious.</p>
<p>Thanks in advance for any suggestions.</p>
| 3 | 2016-10-18T14:10:22Z | 40,127,067 | <p>I don't think there is an out-of-the-box way to do that without using a <code>try..except</code> statement in your cells. AFAIK <a href="https://github.com/ipython/ipython/issues/1977" rel="nofollow">a 4-year-old issue</a> mentions this, but it is still open.</p>
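<p>For illustration, wrapping a cell by hand might look like this (a minimal sketch that reuses the <code>send_mail_to_myself</code> helper from your question; <code>run_experiment()</code> is a hypothetical stand-in for the cell's actual work):</p>
<pre><code>import traceback

try:
    run_experiment()  # placeholder for the long-running code in this cell
except Exception:
    send_mail_to_myself(traceback.format_exc())  # mail the stack trace to yourself
    raise  # re-raise so the notebook still shows the error
</code></pre>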
<p>However, the <a href="https://github.com/ipython-contrib/jupyter_contrib_nbextensions/tree/master/src/jupyter_contrib_nbextensions/nbextensions/runtools" rel="nofollow">runtools extension</a> may do the trick.</p>
| 1 | 2016-10-19T09:15:22Z | [
"python",
"jupyter",
"jupyter-notebook"
] |
Jupyter magic to handle notebook exceptions | 40,110,540 | <p>I have a few long-running experiments in my Jupyter Notebooks. Because I don't know when they will finish, I add an email function to the last cell of the notebook, so I automatically get an email, when the notebook is done.</p>
<p>But when there is a random exception in one of the cells, the whole notebook stops executing and I never get any email. <strong>So I'm wondering if there is some magic function that could execute a function in case of an exception / kernel stop.</strong></p>
<p>Like</p>
<pre><code>def handle_exception(stacktrace):
send_mail_to_myself(stacktrace)
%%in_case_of_notebook_exception handle_exception # <--- this is what I'm looking for
</code></pre>
<p>The other option would be to encapsulate every cell in try-catch, right? But that's soooo tedious.</p>
<p>Thanks in advance for any suggestions.</p>
| 3 | 2016-10-18T14:10:22Z | 40,128,704 | <p>Such a magic command does not exist, but you can write one yourself.</p>
<pre><code>from IPython.core.magic import register_cell_magic
@register_cell_magic
def handle(line, cell):
try:
exec(cell)
except Exception as e:
send_mail_to_myself(e)
</code></pre>
<p>It is not possible to automatically apply the magic command to the whole notebook; you have to add it to each cell where you need this feature. </p>
<pre><code>%%handle
some_code()
raise ValueError('this exception will be caught by the magic command')
</code></pre>
| 2 | 2016-10-19T10:26:33Z | [
"python",
"jupyter",
"jupyter-notebook"
] |
Jupyter magic to handle notebook exceptions | 40,110,540 | <p>I have a few long-running experiments in my Jupyter Notebooks. Because I don't know when they will finish, I add an email function to the last cell of the notebook, so I automatically get an email, when the notebook is done.</p>
<p>But when there is a random exception in one of the cells, the whole notebook stops executing and I never get any email. <strong>So I'm wondering if there is some magic function that could execute a function in case of an exception / kernel stop.</strong></p>
<p>Like</p>
<pre><code>def handle_exception(stacktrace):
send_mail_to_myself(stacktrace)
%%in_case_of_notebook_exception handle_exception # <--- this is what I'm looking for
</code></pre>
<p>The other option would be to encapsulate every cell in try-catch, right? But that's soooo tedious.</p>
<p>Thanks in advance for any suggestions.</p>
| 3 | 2016-10-18T14:10:22Z | 40,135,960 | <p>@show0k gave the correct answer to my question (in regards to magic methods). Thanks a lot! :)</p>
<p>That answer inspired me to dig a little deeper and I came across an IPython method that lets you define a <strong>custom exception handler for the whole notebook</strong>.</p>
<p>I got it to work like this:</p>
<pre><code>from IPython.core.ultratb import AutoFormattedTB
# initialize the formatter for making the tracebacks into strings
itb = AutoFormattedTB(mode = 'Plain', tb_offset = 1)
# this function will be called on exceptions in any cell
def custom_exc(shell, etype, evalue, tb, tb_offset=None):
# still show the error within the notebook, don't just swallow it
shell.showtraceback((etype, evalue, tb), tb_offset=tb_offset)
# grab the traceback and make it into a list of strings
stb = itb.structured_traceback(etype, evalue, tb)
sstb = itb.stb2text(stb)
print (sstb) # <--- this is the variable with the traceback string
print ("sending mail")
send_mail_to_myself(sstb)
# this registers a custom exception handler for the whole current notebook
get_ipython().set_custom_exc((Exception,), custom_exc)
</code></pre>
<p>So this can be put into a single cell at the top of any notebook and as a result it will do the mailing in case something goes wrong.</p>
<p>Note to self / TODO: make this snippet into a little python module that can be imported into a notebook and activated via line magic.</p>
<p>Be careful though. The documentation contains a warning for this <code>set_custom_exc</code> method: "WARNING: by putting in your own exception handler into IPython's main execution loop, you run a very good chance of nasty crashes. This facility should only be used if you really know what you are doing."</p>
| 0 | 2016-10-19T15:38:44Z | [
"python",
"jupyter",
"jupyter-notebook"
] |
Scraping a webpage that is using a firebase database | 40,110,562 | <p><strong>DISCLAIMER: I'm just learning by doing, I have no bad intentions</strong></p>
<p>So, I would like to fetch the list of the applications listed on this website: <a href="http://roaringapps.com/apps" rel="nofollow">http://roaringapps.com/apps</a></p>
<p>I've done similar things in the past, but with simpler websites; this time I'm having problems getting my hands on the data behind this webpage. </p>
<p>The scrolling from page to page is blazing fast so, to understand how the webpage works, I've fired up a packet sniffer and analyzed the traffic. I've noticed that, after the initial loading, no traffic is exchanged between the server and my client, even if I scroll over 2500 records in the browser. How is that possible?</p>
<p>Anyhow. My understanding is that the website is loading the data from a stream of some sort, and render it via Javascript. Am I correct?</p>
<p>So, I've fired up chromium devtools a looked at the "network" tab, and saw that a WebSocket request is made to the following address: wss://s-usc1c-nss-123.firebaseio.com</p>
<p><a href="https://i.stack.imgur.com/XpR63.png" rel="nofollow"><img src="https://i.stack.imgur.com/XpR63.png" alt="chromium devtool"></a></p>
<p>At this point, after googling a bit, I've tried to query the very same server, using the "v=5&ns=roaringapps" query I saw on the devtools window:</p>
<pre><code>from websocket import create_connection
ws = create_connection('wss://s-usc1c-nss-123.firebaseio.com')
ws.send('v=5&ns=roaringapps')
print json.loads(ws.recv())
</code></pre>
<p>And got this reply:</p>
<pre><code>{u't': u'c', u'd': {u't': u'h', u'd': {u'h': u's-usc1c-nss-123.firebaseio.com', u's': u'JUL5t1nC2SXfGaIjwecB6G13j1OsmMVv', u'ts': 1476799051047L, u'v': u'5'}}}
</code></pre>
<p>I was expecting to see a json response with the raw data about applications & so on. What I'm doing wrong? </p>
<p>Thanks a lot!</p>
<p><strong>UPDATE</strong></p>
<p>Actually, I just found out that the website <em>is</em> using json to load its data. I was not seeing it in iterated requests probably because of caching - but disabling it in chromium did the trick.</p>
| 1 | 2016-10-18T14:11:27Z | 40,112,159 | <p>The Firebase Database lets you read and write JSON data, but its SDKs don't simply transfer the raw JSON data; they do many tricks on top of that to ensure an efficient and smooth experience.</p>
<p>What you're getting there is Firebase's wire protocol. The protocol is not publicly documented and (if you're new to it) trying to unravel it is going to give you an unpleasant time.</p>
<p>To retrieve the actual JSON at a location, it's easiest to use <a href="https://firebase.google.com/docs/database/rest/start" rel="nofollow">Firebase's REST API</a>. You can get that by simply appending <code>.json</code> to the URL and firing a HTTP GET request against that.</p>
<p>So if the initial data is being loaded from:</p>
<pre><code>https://mynamespace.firebaseio.com/path/to/data
</code></pre>
<p>You'd get the raw JSON by firing a HTTP GET against:</p>
<pre><code>https://mynamespace.firebaseio.com/path/to/data.json
</code></pre>
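<p>A rough sketch of doing that from Python (assuming the <code>requests</code> library is installed and the location is publicly readable; the namespace and path below are placeholders):</p>
<pre><code>import requests

url = 'https://mynamespace.firebaseio.com/path/to/data.json'
response = requests.get(url)
response.raise_for_status()
data = response.json()  # plain Python dict/list parsed from the returned JSON
print(data)
</code></pre>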
| 1 | 2016-10-18T15:23:32Z | [
"javascript",
"python",
"web",
"firebase",
"firebase-database"
] |
sqlalchemy.exc.AmbiguousForeignKeysError after Inheritance | 40,110,574 | <p>I'm using <code>sqlacodegen</code> for reflecting a bunch of tables from my database.
And i'm getting the following error:</p>
<blockquote>
<p>sqlalchemy.exc.AmbiguousForeignKeysError: Can't determine join between 'Employee' and 'Sales'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly.</p>
</blockquote>
<p>Here's a simplified version of my tables.
I read in the documentation that I should use the <code>foreign_keys</code> parameter to resolve ambiguity between foreign key targets. Although, I think this problem is because of the inheritance. Could someone help me understand what is going on.</p>
<pre><code># coding: utf-8
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import relationship
Base = declarative_base()
class Employee(Base):
__tablename__ = 'Employee'
EmployeeId = Column(Integer, primary_key=True)
class Sales(Employee):
__tablename__ = 'Sales'
EmployeeID = Column(ForeignKey('Employee.EmployeeId'), primary_key=True)
OldemployeeID = Column(ForeignKey('Employee.EmployeeId'))
employee = relationship('Employee', foreign_keys=[EmployeeID])
old_employee = relationship("Employee", foreign_keys=[OldemployeeID])
</code></pre>
| 0 | 2016-10-18T14:12:00Z | 40,128,908 | <p>Just use <code>backref</code> and use <code>Integer</code> on both EmployeeID and OldemployeeID; otherwise you will get another error.</p>
<pre><code>class Sales(Employee):
__tablename__ = 'Sales'
EmployeeID = Column(Integer, ForeignKey('Employee.EmployeeId'), primary_key=True)
OldemployeeID = Column(Integer, ForeignKey('Employee.EmployeeId'))
employee = relationship('Employee', foreign_keys=[EmployeeID], backref='Employee')
old_employee = relationship("Employee", foreign_keys=[OldemployeeID], backref='Employee')
</code></pre>
| 0 | 2016-10-19T10:34:36Z | [
"python",
"python-3.x",
"inheritance",
"sqlalchemy",
"sqlacodegen"
] |
Pre-Calculated Objects in Python | 40,110,694 | <p>Is there a way to pre-calculate an object in Python?</p>
<p>Like when you use a constructor just like:</p>
<pre><code>master = Tk()
</code></pre>
<p>Is there a way to pre-calculate and save the object and read it on startup instead of creating it?</p>
<p>My mind is all about saving work, or doing it in advance for the CPU. Oh and if you know a scenario where this is actually done i'd love to hear about it.</p>
| 2 | 2016-10-18T14:17:17Z | 40,111,242 | <p>I think what you're looking for is the <a href="https://docs.python.org/2/library/pickle.html" rel="nofollow"><code>pickle</code> module</a> to serialize an object. In Python 2 there is <code>pickle</code> and <code>cPickle</code>, which is the same but faster, but iirc Python 3 only has <code>pickle</code> (which, under the hood, is equivalent to <code>cPickle</code> from Python 2). This would allow you to save an object with its pre-calculated attributes.</p>
<pre><code>import cPickle as pickle
import time
class some_object(object):
def __init__(self):
self.my_val = sum([x**2 for x in xrange(1000000)])
start = time.time()
obj = some_object()
print "Calculated value = {}".format(obj.my_val)
with open('saved_object.pickle', 'w') as outfile: #Save the object
pickle.dump(obj, outfile)
interim = time.time()
reload_obj = pickle.load(open('saved_object.pickle','r'))
print "Precalculated value = {}".format(reload_obj.my_val)
end = time.time()
print "Creating object took {}".format(interim - start)
print "Reloading object took {}".format(end - interim)
</code></pre>
| 3 | 2016-10-18T14:40:58Z | [
"python",
"optimization"
] |
How can I tell if I have a file-like object? | 40,110,731 | <p>I want to have a function that writes data to a file:</p>
<pre><code>def data_writer(data, file_name):
spiffy_data = data # ...
with open(file_name, 'w') as out:
out.write(spiffy_data)
</code></pre>
<p>But sometimes, I have a file object instead of a file name. In this case, I sometimes have a <code>tempfile.TemporaryFile</code> (which creates a file-like object that's writable).</p>
<p>I'd like to be able to write something like:</p>
<pre><code>def data_writer(data, file_thing):
spiffy_data = data # ...
if type(file_thing) is file_like:
file_thing.write(spiffy_data)
else:
with open(file_name, 'w') as out:
out.write(spiffy_data)
</code></pre>
<p>What's a good way to do this?</p>
<p>Also, does this make sense to do in Python?</p>
| 0 | 2016-10-18T14:18:40Z | 40,110,875 | <p>While your approach is <a href="https://docs.python.org/3/glossary.html#term-lbyl" rel="nofollow"><code>LBYL</code></a>, the more pythonic style is <a href="https://docs.python.org/3/glossary.html#term-eafp" rel="nofollow"><code>EAFP</code></a>. So you could just <code>try</code> to </p>
<ul>
<li><code>write()</code> to the <code>file_thing</code> you received or</li>
<li><code>open()</code> it</li>
</ul>
<p>and <code>except</code> a potential exception, depending on which you feel better represents the default case.</p>
<p><strong>Edit:</strong> Cf <a href="http://stackoverflow.com/questions/40110731/how-can-i-tell-if-i-have-a-file-like-object/40110875#comment67493524_40110875">ShadowRanger's comment</a> for why mixing the exception handling with a context manager is rather inelegant here.</p>
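<p>A minimal EAFP sketch of that idea (my own illustration, not the only way to structure it):</p>
<pre><code>def data_writer(data, file_thing):
    spiffy_data = data  # ...
    try:
        file_thing.write(spiffy_data)  # assume it is already file-like
    except AttributeError:             # no .write(), so treat it as a file name
        with open(file_thing, 'w') as out:
            out.write(spiffy_data)
</code></pre>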
| 0 | 2016-10-18T14:24:45Z | [
"python",
"python-3.x"
] |
How can I tell if I have a file-like object? | 40,110,731 | <p>I want to have a function that writes data to a file:</p>
<pre><code>def data_writer(data, file_name):
spiffy_data = data # ...
with open(file_name, 'w') as out:
out.write(spiffy_data)
</code></pre>
<p>But sometimes, I have a file object instead of a file name. In this case, I sometimes have a <code>tempfile.TemporaryFile</code> (which creates a file-like object that's writable).</p>
<p>I'd like to be able to write something like:</p>
<pre><code>def data_writer(data, file_thing):
spiffy_data = data # ...
if type(file_thing) is file_like:
file_thing.write(spiffy_data)
else:
with open(file_name, 'w') as out:
out.write(spiffy_data)
</code></pre>
<p>What's a good way to do this?</p>
<p>Also, does this make sense to do in Python?</p>
| 0 | 2016-10-18T14:18:40Z | 40,111,061 | <p>A function should do one thing, and do that one thing well. In the case of <code>data_writer</code>, its one thing is to write data to a file-like object. Let the caller worry about providing such an object. That said, you can also provide that caller in the form of a wrapper that takes a file name and opens it for <code>data_writer</code>.</p>
<pre><code>def data_writer(data, file_obj):
spiffy_data = data # ...
file_obj.write(spiffy_data)
def write_data_to_file(data, file_name):
with open(file_name, "w") as f:
        data_writer(data, f)
</code></pre>
| 3 | 2016-10-18T14:32:28Z | [
"python",
"python-3.x"
] |
Foolproofing a Python calculator | 40,110,778 | <p>I'm writing a basic calculator which works with two different numbers.
So far, I managed to write a working prototype, but while dividing and foolproofing it I ran into a multitude of problems, so I'm posting them
separately. </p>
<hr>
<p>I want the program to repeat the question if the user doesn't provide an eligible operator. That's the code I have now: </p>
<pre><code>def optn_query():
print("Hulk can different things with number!")
print("YOU!")
optn = input("What Hulk do with number?! ")
return optn
</code></pre>
<p><strong>Do I use an if statement to determine if the input is correct?</strong></p>
<hr>
<p>Also I put return optn in there so the next function (gracefully called hulk_math) wouldn't fail midway, but it still does: </p>
<pre><code>Traceback (most recent call last):
File "hulc.py", line 57, in <module>
main()
File "hulc.py", line 13, in main
hulk_math()
File "hulc.py", line 41, in hulk_math
if optn == "+":
NameError: name 'optn' is not defined
</code></pre>
<p>What should I do to fix this?</p>
<p>Here's hulk_math() itself: </p>
<pre><code>def hulk_math():
if optn == "+":
result = num1 + num2
print("Hulk ADDS!!! Hulk thinks it's {0}!".format(result))
elif optn == "-":
result = num1 - num2
print("Hulk SUBTRACTS!!! Hulk thinks it's {0}!".format(result))
elif optn == "*":
result = num1 * num2
print("Hulk MULTIPLIES!!! Hulk thinks it's {0}!".format(result))
elif optn == "/":
result = num1 / num2
print("Hulk DIVIDES!!! Hulk thinks it's {0}!".format(result))
main()
</code></pre>
| 1 | 2016-10-18T14:20:35Z | 40,111,135 | <p>You need to actually call your function:</p>
<pre><code>def hulk_math():
optn = optn_query()
#The rest of your code
</code></pre>
<p>Also, unless <code>num1</code> and <code>num2</code> are defined elsewhere in your code such that they are in the scope of <code>hulk_math</code>, your program is going to fail there too.</p>
| 1 | 2016-10-18T14:35:42Z | [
"python",
"python-3.x",
"object",
"calculator"
] |
Foolproofing a Python calculator | 40,110,778 | <p>I'm writing a basic calculator which works with two different numbers.
So far, I managed to write a working prototype, but while dividing and foolproofing it I ran into a multitude of problems, so I'm posting them
separately. </p>
<hr>
<p>I want the program to repeat the question if the user doesn't provide an eligible operator. That's the code I have now: </p>
<pre><code>def optn_query():
print("Hulk can different things with number!")
print("YOU!")
optn = input("What Hulk do with number?! ")
return optn
</code></pre>
<p><strong>Do I use an if statement to determine if the input is correct?</strong></p>
<hr>
<p>Also I put return optn in there so the next function (gracefully called hulk_math) wouldn't fail midway, but it still does: </p>
<pre><code>Traceback (most recent call last):
File "hulc.py", line 57, in <module>
main()
File "hulc.py", line 13, in main
hulk_math()
File "hulc.py", line 41, in hulk_math
if optn == "+":
NameError: name 'optn' is not defined
</code></pre>
<p>What should I do to fix this?</p>
<p>Here's hulk_math() itself: </p>
<pre><code>def hulk_math():
if optn == "+":
result = num1 + num2
print("Hulk ADDS!!! Hulk thinks it's {0}!".format(result))
elif optn == "-":
result = num1 - num2
print("Hulk SUBTRACTS!!! Hulk thinks it's {0}!".format(result))
elif optn == "*":
result = num1 * num2
print("Hulk MULTIPLIES!!! Hulk thinks it's {0}!".format(result))
elif optn == "/":
result = num1 / num2
print("Hulk DIVIDES!!! Hulk thinks it's {0}!".format(result))
main()
</code></pre>
| 1 | 2016-10-18T14:20:35Z | 40,112,736 | <p>Ok, I fixed it by writing <code>global optn</code> instead of <code>return optn</code>. That way it makes the variable global, so other functions can use it.</p>
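<p>For reference, a minimal sketch of what that change looks like inside <code>optn_query</code> (based on the function from the question):</p>
<pre><code>def optn_query():
    global optn  # make optn a module-level name instead of returning it
    print("Hulk can different things with number!")
    print("YOU!")
    optn = input("What Hulk do with number?! ")
</code></pre>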
| 0 | 2016-10-18T15:49:35Z | [
"python",
"python-3.x",
"object",
"calculator"
] |
Foolproofing a Python calculator | 40,110,778 | <p>I'm writing a basic calculator which works with two different numbers.
So far, I managed to write a working prototype, but while dividing and foolproofing it I ran into a multitude of problems, so I'm posting them
separately. </p>
<hr>
<p>I want the program to repeat the question if the user doesn't provide an eligible operator. That's the code I have now: </p>
<pre><code>def optn_query():
print("Hulk can different things with number!")
print("YOU!")
optn = input("What Hulk do with number?! ")
return optn
</code></pre>
<p><strong>Do I use an if statement to determine if the input is correct?</strong></p>
<hr>
<p>Also I put return optn in there so the next function (gracefully called hulk_math) wouldn't fail midway, but it still does: </p>
<pre><code>Traceback (most recent call last):
File "hulc.py", line 57, in <module>
main()
File "hulc.py", line 13, in main
hulk_math()
File "hulc.py", line 41, in hulk_math
if optn == "+":
NameError: name 'optn' is not defined
</code></pre>
<p>What should I do to fix this?</p>
<p>Here's hulk_math() itself: </p>
<pre><code>def hulk_math():
if optn == "+":
result = num1 + num2
print("Hulk ADDS!!! Hulk thinks it's {0}!".format(result))
elif optn == "-":
result = num1 - num2
print("Hulk SUBTRACTS!!! Hulk thinks it's {0}!".format(result))
elif optn == "*":
result = num1 * num2
print("Hulk MULTIPLIES!!! Hulk thinks it's {0}!".format(result))
elif optn == "/":
result = num1 / num2
print("Hulk DIVIDES!!! Hulk thinks it's {0}!".format(result))
main()
</code></pre>
| 1 | 2016-10-18T14:20:35Z | 40,112,906 | <p>Using <code>global</code> isn't the right way to do this. Pass values from one function to another by saving their return values and passing them as arguments.</p>
<pre><code>def main():
intro()
num1 = num1_query()
optn = optn_query()
num2 = num2_query()
hulk_math(num1, optn, num2)
def hulk_math(num1, optn, num2):
#Your original code will work as expected
</code></pre>
| 0 | 2016-10-18T15:57:39Z | [
"python",
"python-3.x",
"object",
"calculator"
] |
large data transformation in python | 40,110,780 | <p>I have a large data set (ten 12gb csv files) that have 25 columns and would want to transform it to a dataset with 6 columns. the first 3 columns remains the same whereas the 4th one would be the variable names and the rest contains data. Below is my input:</p>
<pre><code>#RIC Date[L] Time[L] Type L1-BidPrice L1-BidSize L1-AskPrice L1-AskSize L2-BidPrice L2-BidSize L2-AskPrice L2-AskSize L3-BidPrice L3-BidSize L3-AskPrice L3-AskSize L4-BidPrice L4-BidSize L4-AskPrice L4-AskSize L5-BidPrice L5-BidSize L5-AskPrice L5-AskSize
HOU.ALP 20150901 30:10.8 Market Depth 5.29 50000 5.3 32000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
HOU.ALP 20150901 30:10.8 Market Depth 5.29 50000 5.3 44000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
HOU.ALP 20150901 30:12.1 Market Depth 5.29 50000 5.3 32000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
HOU.ALP 20150901 30:12.1 Market Depth 5.29 50000 5.3 38000 5.28 50000 5.31 50000 5.27 50000 5.32 50000 5.26 50000 5.33 50000 5.34 50000
</code></pre>
<p>and I would transform it to:</p>
<pre><code>#RIC Date[L] Time[L] level Bid_price bid_volume Ask_price Ask_volume
HOU.ALP 20150901 30:10.8 L1 5.29 50000 5.3 50000
HOU.ALP 20150901 30:10.8 L2 5.28 50000 5.31 50000
HOU.ALP 20150901 30:12.1 L3 5.27 50000 5.32 50000
HOU.ALP 20150901 30:12.1 L4 5.26 50000 5.33 50000
HOU.ALP 20150901 30:12.1 L5
HOU.ALP 20150901 30:12.1 L1 5.29 50000 5.3 50000
HOU.ALP 20150901 30:12.1 L2 5.28 44000 5.31 50000
HOU.ALP 20150901 30:12.1 L3 5.27 48000 5.32 50000
HOU.ALP 20150901 30:12.1 L4 5.26 50000 5.33 50000
</code></pre>
<p>Here is my attempt at the code. I think I would have to use a dictionary to write to a csv file:</p>
<pre><code>import csv
from itertools import izip_longest  # izip_longest is the Python 2 name

def depth_data_transformation(input_file_list, output_file):
for file in input_file_list:
file_to_open = '%s.csv' %file
with open(file_to_open) as f, open(output_file, "w") as out:
next(f) # skip header
cols = ["#RIC", "Date[L]", "Time[L]", "level", "Bid_price", "bid_volume", "Ask_price", "Ask_volume"]
wr = csv.writer(out)
wr.writerow(cols)
for row in csv.reader(f):
# get all but first three cols
it = row[4:]
# zip_longest(*[iter(it)] * 4, fillvalue="") -> group into 4's, add empty string for missing values
for ind, t in enumerate(izip_longest(*[iter(it)] * 4, fillvalue=""), 1):
# first 3 cols, level and group all in one row/list.
wr.writerow(row[:3]+ ["l{}".format(ind)] + list(t))
</code></pre>
| 0 | 2016-10-18T14:20:45Z | 40,112,986 | <p>You need to group the levels, i.e <code>L1-BidPrice L1-BidSize L1-AskPrice L1-AskSize</code> and write each to a new row :</p>
<pre><code>import csv
from itertools import zip_longest # izip_longest python2
with open("infile.csv") as f, open("out.csv", "w") as out:
next(f) # skip header
cols = ["#RIC", "Date[L]", "Time[L]", "level", "Bid_price", "bid_volume", "Ask_price", "Ask_volume"]
wr = csv.writer(out)
wr.writerow(cols)
for row in csv.reader(f):
# get all but first three cols.
it = row[4:]
# zip_longest(*[iter(it)] * 4, fillvalue="") -> group into 4's, add empty string for missing values
for ind, t in enumerate(zip_longest(*[iter(it)] * 4, fillvalue=""), 1):
# first 3 cols, level and group all in one row/list.
wr.writerow(row[:3]+ ["l{}".format(ind)] + list(t))
</code></pre>
<p>Which would give you:</p>
<pre><code>#RIC,Date[L],Time[L],level,Bid_price,bid_volume,Ask_price,Ask_volume
HOU.ALP,20150901,30:10.8,l1,5.29,50000,5.3,32000
HOU.ALP,20150901,30:10.8,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:10.8,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:10.8,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:10.8,l5,5.34,50000,,
HOU.ALP,20150901,30:10.8,l1,5.29,50000,5.3,44000
HOU.ALP,20150901,30:10.8,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:10.8,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:10.8,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:10.8,l5,5.34,50000,,
HOU.ALP,20150901,30:12.1,l1,5.29,50000,5.3,32000
HOU.ALP,20150901,30:12.1,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:12.1,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:12.1,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:12.1,l5,5.34,50000,,
HOU.ALP,20150901,30:12.1,l1,5.29,50000,5.3,38000
HOU.ALP,20150901,30:12.1,l2,5.28,50000,5.31,50000
HOU.ALP,20150901,30:12.1,l3,5.27,50000,5.32,50000
HOU.ALP,20150901,30:12.1,l4,5.26,50000,5.33,50000
HOU.ALP,20150901,30:12.1,l5,5.34,50000,,
</code></pre>
<p>In <code>for ind, t in enumerate(zip_longest(*[iter(it)] * 4, fillvalue=""), 1)</code>, <em><code>enumerate</code></em> with a start index of 1 is keeping track of which <em>group/level</em> we are at.</p>
<p><em><code>zip_longest(*[iter(it)] * 4, fillvalue="")</code></em> groups the cols into sections i.e <code>L1-BidPrice,L1-BidSize,L1-AskPrice,L1-AskSize</code>, <code>L2-BidPrice,L2-BidSize,L2-AskPrice,L2-AskSize</code> etc.. all the way to <code>Ln-..</code> </p>
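<p>To see that grouping idiom in isolation (a small throwaway example, separate from the file-processing code above):</p>
<pre><code>>>> from itertools import zip_longest
>>> it = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i']
>>> list(zip_longest(*[iter(it)] * 4, fillvalue=""))
[('a', 'b', 'c', 'd'), ('e', 'f', 'g', 'h'), ('i', '', '', '')]
</code></pre>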
<p>You have <code>HOU.ALP 20150901 30:10.8 L1 5.29 50000 5.3 50000</code> in your expected output but 32000 is the value in your input for <code>L1-AskSize</code>, each row has 5 levels and you also have 8 columns so I presume your expected output is wrong.</p>
| 1 | 2016-10-18T16:02:24Z | [
"python",
"csv"
] |
Python reading in a file and parsing it to variables with whitespaces | 40,110,782 | <p>I have a text file (Students.txt) I need to read into python and parse into the variables first_name, middle_name, last_name, student_id. The first few lines of the text file are shown below: </p>
<pre><code>Last Name Midle Name First Name Student ID
----------------------------------------------
Howard Joe Moe howar1m
Howard Curly howar1c
Fine Ken Lary fine1l
</code></pre>
<p>The code I've tried </p>
<pre><code>f = open("Students.txt")
for line in f:
fields = line.strip().split()
last_name = fields[0]
</code></pre>
<p>Only works for last name, because if I try <code>fields[1]</code> for middle name, I get a "list index out of range" error. I've tried using <code>if not line.startswith('Middle Name'): continue</code> but it doesn't recognize the column.
Does anyone have a better approach as to how to parse these into their respective variables?</p>
| 0 | 2016-10-18T14:20:47Z | 40,111,026 | <p>You should probably use readlines() and then loop over each line, skipping the first 2 rows. The code will look like this:</p>
<pre><code>f = open("Student.txt")
lines = f.readlines()
print lines
for line in lines[2:]:
fields = line.split()
last_name = fields[0]
middle_name = fields[1]
</code></pre>
| 1 | 2016-10-18T14:30:44Z | [
"python",
"parsing"
] |
Python backtest using percentage based commission | 40,110,800 | <p>I'm writing a script to backtest some strategies for a set of stocks using the bt framework for python. In the bt documentation (<a href="http://pmorissette.github.io/bt/bt.html#module-bt.backtest" rel="nofollow">backtest module</a>) it says: </p>
<blockquote>
<p>commission (fn(quantity)): The commission function to be used.</p>
</blockquote>
<p>So when I run my code </p>
<pre><code>result = bt.Backtest(strategy, data, initial_capital= 100000.00, commissions=)
</code></pre>
<p>I want to pass a function that returns a percentage based commission e.g. 0.5 % of the transaction. Since I don't know the size of the transactions, is this even possible? How would it otherwise be solved, using a set commission?</p>
| 0 | 2016-10-18T14:21:30Z | 40,128,258 | <p>Solved it by creating a function with parameters for quantity and price. Thus it was easy returning a percentage based on the transaction cost as follows:</p>
<pre><code>def my_comm(q, p):
return abs(q)*p*0.0025
</code></pre>
| 0 | 2016-10-19T10:07:21Z | [
"python",
"pandas",
"stocks",
"back-testing"
] |
How can we simulate pass by reference in python? | 40,110,812 | <p>Let's say we have a function <code>foo()</code></p>
<pre><code>def foo():
foo.a = 2
foo.a = 1
foo()
>> foo.a
>> 2
</code></pre>
<p>Is this pythonic or should I wrap the variable in mutable objects such as a list?</p>
<p>Eg:</p>
<pre><code>a = [1]
def foo(a):
a[0] = 2
foo()
>> a
>> 2
</code></pre>
| 0 | 2016-10-18T14:21:59Z | 40,111,008 | <p>Since you "want to mutate the variable so that the changes are effected in global scope as well" use the <code>global</code> keyword to tell your function that the name <code>a</code> is a global variable. This means that any assignment to <code>a</code> inside of your function affects the global scope. Without the <code>global</code> declaration assignment to <code>a</code> in your function would create a new local variable.</p>
<pre><code>>>> a = 0
>>> def foo():
... global a
... a = 1
...
>>> foo()
>>> a
1
</code></pre>
| 1 | 2016-10-18T14:29:52Z | [
"python",
"pass-by-reference"
] |
How can we simulate pass by reference in python? | 40,110,812 | <p>Let's say we have a function <code>foo()</code></p>
<pre><code>def foo():
foo.a = 2
foo.a = 1
foo()
>> foo.a
>> 2
</code></pre>
<p>Is this pythonic or should I wrap the variable in mutable objects such as a list?</p>
<p>Eg:</p>
<pre><code>a = [1]
def foo(a):
a[0] = 2
foo()
>> a
>> 2
</code></pre>
| 0 | 2016-10-18T14:21:59Z | 40,111,328 | <p>Use a class (maybe a bit overkill):</p>
<pre><code>class Foo:
def __init__(self):
        self.a = 0
def bar(f):
f.a = 2
foo = Foo()
foo.a = 1
bar(foo)
print(foo.a)
</code></pre>
| 0 | 2016-10-18T14:44:31Z | [
"python",
"pass-by-reference"
] |
AttributeError 'nonetype' object has no attribute 'recv' | 40,110,816 | <p>First of all I need to say I've never tried coding in python before... </p>
<p>I'm trying to make a Twitch IRC bot working but I keep failing... </p>
<p>My bot.py code looks like this: </p>
<pre><code>from src.lib import irc as irc_
from src.lib import functions_general
from src.lib import functions_commands as commands
from src.config import config
class PartyMachine:
def __init__(self, config):
self.config = config
self.irc = irc_.irc(config)
self.socket = self.irc.get_irc_socket_object()
def sock(self):
irc = self.irc
sock = self.socket
config = self.config
kage = sock
while True:
data = sock.recv(2048).rstrip()
if len(data) == 0:
pp('Connection was lost, reconnecting.')
sock = self.irc.get_irc_socket_object()
if config['debug']:
print (data)
</code></pre>
<p>my config.py is here:</p>
<pre><code>'socket_buffer_size': 1024
</code></pre>
<p>My irc.py is here:</p>
<pre><code>def get_irc_socket_object(self):
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(10)
self.sock = sock
try:
sock.connect((self.config['server'], self.config['port']))
except:
pp('Cannot connect to server (%s:%s).' % (self.config['server'], self.config['port']), 'error')
sys.exit()
sock.settimeout(None)
def sock_send(sock, send, self):
sock.send('USER %s\r\n' % self.config['username'], sock.encode('utf-8'), send.encode('utf-8'))
sock.send('PASS %s\r\n' % self.config['oauth_password'])
sock.send('NICK %s\r\n' % self.config['username'])
if self.check_login_status(sock.recv(1024)):
pp('Login successful.')
else:
pp('Login unsuccessful. (hint: make sure your oauth token is set in self.config/self.config.py).', 'error')
sys.exit()
</code></pre>
<p>and my serve.py is here:</p>
<pre><code>from sys import argv
from src.bot import *
from src.config.config import *
bot = PartyMachine(config).sock()
</code></pre>
<p>It keeps failing with "<code>AttributeError 'nonetype' object has no attribute 'recv'</code>". How can this be ? </p>
| 0 | 2016-10-18T14:22:12Z | 40,111,748 | <p>Your <code>get_irc_socket_object(self)</code> might be the problem. You call it with the line <code>self.socket = self.irc.get_irc_socket_object()</code>. This means that python expects the function <code>get_irc_socket_object(self)</code> to return a socket object, but you don't return anything (you just write <code>self.sock = sock</code>, which doesn't do anything because you use <code>self.socket</code> for the rest of your code). As a result, the function returns <code>None</code>, so now <code>self.socket</code> just has that as its value. Therefore, when you make the call to <code>recv</code> you get your error.</p>
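<p>A minimal sketch of the fix, showing only the relevant lines of the method from your question:</p>
<pre><code>def get_irc_socket_object(self):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(10)
    self.sock = sock
    # ... connect / login code from the question ...
    return sock  # so self.irc.get_irc_socket_object() no longer returns None
</code></pre>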
<p>Also, please clean up your variable names. Sock is used way too often in your code and makes it very hard to follow.</p>
| 0 | 2016-10-18T15:01:52Z | [
"python",
"python-3.x"
] |
How to import a module from a different directory and have it look for files in that directory | 40,110,847 | <p>I'm trying to import a Python module in the directory <code>/home/kurt/dev/clones/ipercron-utils/tester</code>. This directory contains a <code>tester.py</code> and a <code>config.yml</code> file. The <code>tester.py</code> includes the (leading) line</p>
<pre><code>config = yaml.safe_load(open("config.yml"))
</code></pre>
<p>Now, from another directory, I try to import it like so:</p>
<pre><code>import sys
sys.path.insert(0, "/home/kurt/dev/clones/ipercron-utils/tester")
import tester
</code></pre>
<p>However, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/kurt/dev/clones/ipercron-compose/controller/controller_debug2.py", line 9, in <module>
import tester
File "/home/kurt/dev/clones/ipercron-utils/tester/tester.py", line 28, in <module>
config = yaml.safe_load(open("config.yml"))
IOError: [Errno 2] No such file or directory: 'config.yml'
</code></pre>
<p>As I understand it, Python is looking for the <code>config.yml</code> file in the current directory (<code>/home/kurt/dev/clones/ipercron-compose/controller</code>) whereas I want it to look in the directory the module was imported from (<code>/home/kurt/dev/clones/ipercron-utils/tester</code>). Is there any way to specify this?</p>
| 2 | 2016-10-18T14:23:35Z | 40,110,909 | <p><code>__file__</code> always contains the current module filepath (here <code>/home/kurt/dev/clones/ipercron-utils/tester/tester.py</code>).</p>
<p>Just perform a <code>dirname</code> on it => you have the path which contains your <code>yml</code> configuration file.</p>
<p>code it like this in your <code>tester.py</code> module (<code>import os</code> if not already done):</p>
<pre><code>module_dir = os.path.dirname(__file__)
config = yaml.safe_load(open(os.path.join(module_dir,"config.yml")))
</code></pre>
<p>side note: <code>__file__</code> doesn't work on the main file when the code is "compiled" using py2exe. In that case you have to do:</p>
<pre><code>module_dir = os.path.dirname(sys.executable)
</code></pre>
| 2 | 2016-10-18T14:26:04Z | [
"python"
] |
update threaded tkinter gui | 40,110,907 | <p>I have a small display connected to my pi.
Now I have a Python script that measures the time between two events of the gpio headers.
I want to display this time (the script to get this time is working perfectly). For that I created a <code>tkinter</code> window.
There, I have a label that should display this time.
I have threaded the gui function to make it possible for the program to still listen to the GPIO pin.</p>
<pre><code>def guiFunc():
gui = Tk()
gui.title("Test")
gui.geometry("500x200")
app = Frame(gui)
app.grid()
beattime = Label(app, text = "test")
beattime.grid()
gui.mainloop()
gui_thread = threading.Thread(target = guiFunc)
gui_thread.start()
while True:
time.sleep(.01)
if (GPIO.input(3)):
time = trigger() #trigger is the function to trigger the 'stopwatch'
global beattime
beattime['text'] = str(time)
while GPIO.input(3): #'wait' for btn to release (is there a better way?)
print "btn_pressed"
</code></pre>
<p>So the program isn't doing anything since I added these lines:</p>
<pre><code>global beattime
beattime['text'] = str(time)
</code></pre>
<p>What am I doing wrong?</p>
| 0 | 2016-10-18T14:26:01Z | 40,111,304 | <p>Use <code>tkinter.StringVar</code></p>
<pre><code># omitting lines
global timevar
timevar = StringVar()
timevar.set("Test")
beattime = Label(app, textvariable=timevar)
# omitting lines
#changing the text:
while True:
time.sleep(.01)
if (GPIO.input(3)):
time = trigger() #trigger is the function to trigger the 'stopwatch'
timevar.set(str(time))
        gui.update()  # just in case (the Tk root is called gui in your code)
while GPIO.input(3): #'wait' for btn to release (is there a better way?)
print "btn_pressed"
</code></pre>
<p>And you should run the GUI in your main thread. It's not recommended to make GUI calls from different threads.</p>
| 0 | 2016-10-18T14:43:16Z | [
"python",
"multithreading",
"user-interface",
"tkinter",
"gpio"
] |
dictionary to pandas DataFrame | 40,111,091 | <p>I have this dictionary:</p>
<pre><code>diccionario = {'Monetarios':['B1','B2'],
'Monetario Dinamico':['B1','B2'],
'Renta fija corto plazo':['B1','B2'],
'Garantizados de RF':['B1','B2'],
'Renta Fija Largo Plazo':['B2','B3'],
'Garantizados de RV':['B2','B3'],
'Mixtos Renta Fija':['B2','B3'],
'Mixtos Renta Variable':['B3','B4'],
'Renta Variable':['B3','B4'],
'Alternativos':['B3','B4'],
'Fondos Inmobiliarios en Directo':['G3','G3'],
'IIC de Inversion Libre':['G4','G4'],
'IIC de IIC de Inversion Libre':['G4','G4'],
'Money Markets':['B1','B2'],
'Money Markets Enhanced':['B1','B2'],
'Fixed Income Short Term':['B1','B2'],
'Capital Guaranteed Funds':['B1','B2'],
'Fixed Income Long Term':['B2','B3'],
'Capital Guaranteed Equity Funds':['B2','B3'],
'Mixed Fixed Income Funds':['B2','B3'],
'Mixed Equity Funds':['B3','B4'],
'Equity':['B3','B4'],
'Alternatives':['B3','B4'],
'Real State':['G3','G4'],
'Hedge Funds':['G4','G4'],
'Funds of Hedge Funds':['G4','G4'],
'HARMONIZED':'G4',
'HIGH_YLD_EMERGING_MARKETS':'B4'
}
</code></pre>
<p>And I want a DataFrame with the words I am using as keys as the first column and the values assigned to those keys as the other columns, like this:</p>
<pre><code>col 1 col 2 col 3
Monetarios B1 B2
Monetar din. B1 B2
Rent fija... B1 B2
</code></pre>
<p>...
...</p>
<p>I've just got the first column with this:
df_dict = pd.DataFrame(diccionario)</p>
<pre><code>k3 = list(df_dict.columns.values)
</code></pre>
<p>thanks in advance</p>
| 2 | 2016-10-18T14:33:57Z | 40,111,152 | <p>I think you can use transpose by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.T.html" rel="nofollow"><code>T</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>:</p>
<pre><code>df = pd.DataFrame.from_dict(diccionario).T.reset_index()
df.columns = ['col1','col2','col3']
print (df)
col1 col2 col3
0 Alternatives B3 B4
1 Alternativos B3 B4
2 Capital Guaranteed Equity Funds B2 B3
3 Capital Guaranteed Funds B1 B2
4 Equity B3 B4
5 Fixed Income Long Term B2 B3
6 Fixed Income Short Term B1 B2
7 Fondos Inmobiliarios en Directo G3 G3
8 Funds of Hedge Funds G4 G4
9 Garantizados de RF B1 B2
10 Garantizados de RV B2 B3
11 HARMONIZED G4 G4
12 HIGH_YLD_EMERGING_MARKETS B4 B4
13 Hedge Funds G4 G4
14 IIC de IIC de Inversion Libre G4 G4
15 IIC de Inversion Libre G4 G4
16 Mixed Equity Funds B3 B4
17 Mixed Fixed Income Funds B2 B3
18 Mixtos Renta Fija B2 B3
19 Mixtos Renta Variable B3 B4
20 Monetario Dinamico B1 B2
21 Monetarios B1 B2
22 Money Markets B1 B2
23 Money Markets Enhanced B1 B2
24 Real State G3 G4
25 Renta Fija Largo Plazo B2 B3
26 Renta Variable B3 B4
27 Renta fija corto plazo B1 B2
</code></pre>
| 1 | 2016-10-18T14:36:24Z | [
"python",
"pandas",
"dictionary",
"dataframe"
] |
dictionary convert to list and sort causing error [python2.7] | 40,111,271 | <p>I have a dictionary which is a histogram of different hours in a day:</p>
<blockquote>
<p>{'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18':
1, '09': 2, '17': 2, '06': 1, '07': 1}</p>
</blockquote>
<p>and I want to sort the dictionary based on the hours(first item) and produce something like:</p>
<blockquote>
<p>('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6),
('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19', 1)</p>
</blockquote>
<p>I tried hours = list(dict.items()) and it works pretty well, but when I tried earlier</p>
<pre><code>for hour, freq in dict:
count = (hour, freq)
lst.append(count)
lst.sort()
print lst
</code></pre>
<p>I get</p>
<blockquote>
<p>('0', '4'), ('0', '6'), ('0', '7'), ('0', '9'), ('1', '0'), ('1',
'1'), ('1', '4'), ('1', '5'), ('1', '6'), ('1', '7'), ('1', '8'),
('1', '9')</p>
</blockquote>
<p>It seems like only the first digit of the hours are recorded, but I don't know why. The for loop worked well when I was counting frequency of a character in a given string. Can somebody please help me explain this? Thanks a lot.</p>
| 0 | 2016-10-18T14:42:24Z | 40,111,348 | <p>When you do <code>for ... in d</code>, the <code>dict</code> only iterates over the keys in the dictionary, not the values. Instead of getting a tuple of <code>hour, count</code>, you're getting a two-character string that's being unpacked into <code>first_character, second_character</code>. Note that each of the pairs in the last output of your question is actually a key split into a tuple.</p>
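<p>A short illustration of the difference (my own snippet, using a trimmed-down version of your dictionary):</p>
<pre><code>d = {'11': 6, '04': 3}
for key in d:                  # plain iteration gives keys only
    print key                  # prints e.g. 11 and 04 (order not guaranteed)
for hour, freq in d.items():   # .items() gives (key, value) pairs
    print hour, freq           # prints e.g. 11 6 and 04 3
</code></pre>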
| 0 | 2016-10-18T14:45:23Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
dictionary convert to list and sort causing error [python2.7] | 40,111,271 | <p>I have a dictionary which is a histogram of different hours in a day:</p>
<blockquote>
<p>{'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18':
1, '09': 2, '17': 2, '06': 1, '07': 1}</p>
</blockquote>
<p>and I want to sort the dictionary based on the hours(first item) and produce something like:</p>
<blockquote>
<p>('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6),
('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19', 1)</p>
</blockquote>
<p>I tried hours = list(dict.items()) and it works pretty well, but when I tried earlier</p>
<pre><code>for hour, freq in dict:
count = (hour, freq)
lst.append(count)
lst.sort()
print lst
</code></pre>
<p>I get</p>
<blockquote>
<p>('0', '4'), ('0', '6'), ('0', '7'), ('0', '9'), ('1', '0'), ('1',
'1'), ('1', '4'), ('1', '5'), ('1', '6'), ('1', '7'), ('1', '8'),
('1', '9')</p>
</blockquote>
<p>It seems like only the first digit of the hours are recorded, but I don't know why. The for loop worked well when I was counting frequency of a character in a given string. Can somebody please help me explain this? Thanks a lot.</p>
| 0 | 2016-10-18T14:42:24Z | 40,111,412 | <p>You can try something like this:</p>
<pre><code>d = {'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18': 1, '09': 2, '17': 2, '06': 1, '07': 1}
l = list(d.iteritems())
l.sort()
print l
</code></pre>
<p>The output is:</p>
<pre><code>[('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6), ('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19',1)]
</code></pre>
| 1 | 2016-10-18T14:47:42Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
dictionary convert to list and sort causing error [python2.7] | 40,111,271 | <p>I have a dictionary which is a histogram of different hours in a day:</p>
<blockquote>
<p>{'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18':
1, '09': 2, '17': 2, '06': 1, '07': 1}</p>
</blockquote>
<p>and I want to sort the dictionary based on the hours(first item) and produce something like:</p>
<blockquote>
<p>('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6),
('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19', 1)</p>
</blockquote>
<p>I tried hours = list(dict.items()) and it works pretty well, but when I tried earlier</p>
<pre><code>for hour, freq in dict:
count = (hour, freq)
lst.append(count)
lst.sort()
print lst
</code></pre>
<p>I get</p>
<blockquote>
<p>('0', '4'), ('0', '6'), ('0', '7'), ('0', '9'), ('1', '0'), ('1',
'1'), ('1', '4'), ('1', '5'), ('1', '6'), ('1', '7'), ('1', '8'),
('1', '9')</p>
</blockquote>
<p>It seems like only the first digit of the hours are recorded, but I don't know why. The for loop worked well when I was counting frequency of a character in a given string. Can somebody please help me explain this? Thanks a lot.</p>
| 0 | 2016-10-18T14:42:24Z | 40,111,429 | <p>You'll want to use <code>iteritems</code></p>
<pre><code>for hour, value in dict.iteritems():
etc.
</code></pre>
<p>Of course, you shouldn't use the name 'dict' -- rename it to something else since it's already the name of the <code>dict</code> class. You can also use <code>items()</code> if you want -- it takes more space and is slightly faster but it shouldn't matter if you're using small amounts of data.</p>
| 1 | 2016-10-18T14:48:27Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
dictionary convert to list and sort causing error [python2.7] | 40,111,271 | <p>I have a dictionary which is a histogram of different hours in a day:</p>
<blockquote>
<p>{'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18':
1, '09': 2, '17': 2, '06': 1, '07': 1}</p>
</blockquote>
<p>and I want to sort the dictionary based on the hours(first item) and produce something like:</p>
<blockquote>
<p>('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6),
('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19', 1)</p>
</blockquote>
<p>I tried hours = list(dict.items()) and it works pretty well, but when I tried earlier</p>
<pre><code>for hour, freq in dict:
count = (hour, freq)
lst.append(count)
lst.sort()
print lst
</code></pre>
<p>I get</p>
<blockquote>
<p>('0', '4'), ('0', '6'), ('0', '7'), ('0', '9'), ('1', '0'), ('1',
'1'), ('1', '4'), ('1', '5'), ('1', '6'), ('1', '7'), ('1', '8'),
('1', '9')</p>
</blockquote>
<p>It seems like only the first digit of the hours are recorded, but I don't know why. The for loop worked well when I was counting frequency of a character in a given string. Can somebody please help me explain this? Thanks a lot.</p>
| 0 | 2016-10-18T14:42:24Z | 40,111,461 | <p>Go through both keys and values using <code>dict.items()</code>. Combine it with a list comprehension to get the list you want and finally sort it.</p>
<pre><code>a = {'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18': 1, '09': 2, '17': 2, '06': 1, '07': 1}
b = sorted([(x, y) for x, y in a.items()])
print(b)
# prints
# [('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6), ('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19', 1)]
</code></pre>
| 0 | 2016-10-18T14:49:57Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
dictionary convert to list and sort causing error [python2.7] | 40,111,271 | <p>I have a dictionary which is a histogram of different hours in a day:</p>
<blockquote>
<p>{'11': 6, '10': 3, '15': 2, '14': 1, '04': 3, '16': 4, '19': 1, '18':
1, '09': 2, '17': 2, '06': 1, '07': 1}</p>
</blockquote>
<p>and I want to sort the dictionary based on the hours(first item) and produce something like:</p>
<blockquote>
<p>('04', 3), ('06', 1), ('07', 1), ('09', 2), ('10', 3), ('11', 6),
('14', 1), ('15', 2), ('16', 4), ('17', 2), ('18', 1), ('19', 1)</p>
</blockquote>
<p>I tried hours = list(dict.items()) and it works pretty well, but when I tried earlier</p>
<pre><code>for hour, freq in dict:
count = (hour, freq)
lst.append(count)
lst.sort()
print lst
</code></pre>
<p>I get</p>
<blockquote>
<p>('0', '4'), ('0', '6'), ('0', '7'), ('0', '9'), ('1', '0'), ('1',
'1'), ('1', '4'), ('1', '5'), ('1', '6'), ('1', '7'), ('1', '8'),
('1', '9')</p>
</blockquote>
<p>It seems like only the first digit of the hours are recorded, but I don't know why. The for loop worked well when I was counting frequency of a character in a given string. Can somebody please help me explain this? Thanks a lot.</p>
| 0 | 2016-10-18T14:42:24Z | 40,111,489 | <p>I think you are simply iterating over dictionary keys instead of pairs of key and value.</p>
<p>Have a look at this post:
<a href="http://stackoverflow.com/questions/3294889/iterating-over-dictionaries-using-for-loops-in-python">Iterating over dictionaries using for loops in Python</a></p>
<p>Your code should work if updated as below:</p>
<pre><code>for hour, freq in dict.iteritems():
count = (hour, freq)
lst.append(count)
lst.sort()
</code></pre>
| 1 | 2016-10-18T14:50:53Z | [
"python",
"list",
"python-2.7",
"dictionary"
] |
How to Convert URLField in a Image on Django | 40,111,431 | <p>I have stored the URL of an image in a URLField. I want to display the image itself on my home page, not the URL string. How can I turn a URL such as "www.google.es/images/car" into the image of the car it points to?</p>
<p>models.py</p>
<pre><code>class Photo(models.Model):
    name = models.CharField(max_length=150)
    url = models.URLField()
    def __unicode__(self):  # 0 parameters
        return self.name
</code></pre>
<p>views.py</p>
<pre><code>def home(request):
    photos = Photo.objects.all()
    html = '<ul>'
    for photo in photos:
        html += '<li>' + photo.url + '</li>'
    html += '</ul>'
    return HttpResponse(html)
</code></pre>
<p>How could I convert <code>photo.url</code> into an image?</p>
| 0 | 2016-10-18T14:48:34Z | 40,111,458 | <p>You cannot convert the string into image however you can use the url in <code>src</code> attribute inside <code><img></code> tag</p>
<pre><code>for photo in photos:
    html += '<li><img src="' + photo.url + '"></li>'
</code></pre>
<p>Remember that you are generating HTML for the website, not rendering its content.</p>
| 1 | 2016-10-18T14:49:54Z | [
"python",
"django"
] |
How to Convert URLField in a Image on Django | 40,111,431 | <p>I have stored the URL of an image in a URLField. I want to display the image itself on my home page, not the URL string. How can I turn a URL such as "www.google.es/images/car" into the image of the car it points to?</p>
<p>models.py</p>
<pre><code>class Photo(models.Model):
    name = models.CharField(max_length=150)
    url = models.URLField()
    def __unicode__(self):  # 0 parameters
        return self.name
</code></pre>
<p>views.py</p>
<pre><code>def home(request):
    photos = Photo.objects.all()
    html = '<ul>'
    for photo in photos:
        html += '<li>' + photo.url + '</li>'
    html += '</ul>'
    return HttpResponse(html)
</code></pre>
<p>How could I convert <code>photo.url</code> into an image?</p>
| 0 | 2016-10-18T14:48:34Z | 40,111,507 | <p>Try:</p>
<pre><code>def home(request):
    photos = Photo.objects.all()
    html = '<ul>'
    for photo in photos:
        html += '<li><img src="{}"></li>'.format(photo.url)
    html += '</ul>'
    return HttpResponse(html)
</code></pre>
<p>Perhaps even better, again relying on formatting instead of string concatenation:</p>
<pre><code>def home(request):
    photos = Photo.objects.all()
    html = '<ul>{}</ul>'
    html_photos = []
    for photo in photos:
        html_photos.append('<li><img src="{}"></li>'.format(photo.url))
    html = html.format("\n".join(html_photos))
    return HttpResponse(html)
</code></pre>
| 0 | 2016-10-18T14:51:40Z | [
"python",
"django"
] |
GeoTIFF issue with opening in PIL | 40,111,453 | <p>Every time I open a GeoTIFF image of an orthophoto in python (tried PIL, matplotlib, scipy, openCV) the image screws up. Some corners are being cropped, although the image keeps its original shape. If I manually convert the tif to, for instance, a png in Photoshop and load that, it does work correctly. So it seems like PIL has some trouble handling tif files with objects that do not fill the entire canvas. Does anyone have a solution for this problem?</p>
<p><em>Part of original Image:</em></p>
<p><img src="https://i.stack.imgur.com/uSjKlm.jpg" alt="Part of original Image"></p>
<p><em>After opening:</em></p>
<p><img src="https://i.stack.imgur.com/wUVk9m.jpg" alt="After opening"></p>
| 0 | 2016-10-18T14:49:31Z | 40,128,829 | <p>It would've been really nice if you put the link of the figure that you are using (if it's free). I downloaded a sample geotiff image from <a href="http://eoimages.gsfc.nasa.gov/images/imagerecords/57000/57752/land_shallow_topo_2048.tif" rel="nofollow">here</a>, and I used <a href="https://pypi.python.org/pypi/GDAL/" rel="nofollow">gdal</a> to open it.</p>
<p>The shape of the <code>geotiff.ReadAsArray()</code> is <code>(3, 1024, 2048)</code> so I convert it to <code>(1024, 2048, 3)</code> (RGB) and open it with <code>imshow</code>: </p>
<pre><code>import gdal
gdal.UseExceptions()
import matplotlib.pyplot as plt
import numpy as np

geotiff = gdal.Open('/home/vafanda/Downloads/test.tif')
geotiff_arr= geotiff.ReadAsArray()
print np.shape(geotiff_arr)
geotiff_shifted = np.rollaxis(geotiff_arr,0,3)
print "Dimension converted to: "
print np.shape(geotiff_shifted)
plt.imshow(geotiff_shifted)
plt.show()
</code></pre>
<p>result: </p>
<p><a href="https://i.stack.imgur.com/JApVV.png" rel="nofollow"><img src="https://i.stack.imgur.com/JApVV.png" alt="enter image description here"></a></p>
| 0 | 2016-10-19T10:31:26Z | [
"python",
"image",
"scipy",
"tiff",
"geotiff"
] |
pandas - agg() function | 40,111,546 | <p>The ordering of my age, height and weight columns is changing with each run of the code. I need to keep the order of my agg columns static because I ultimately refer to this output file according to the column locations. What can I do to make sure age, height and weight are output in the same order every time?</p>
<pre><code>d = pd.read_csv(input_file, na_values=[''])
df = pd.DataFrame(d)
df.index_col = ['name', 'address']
df_out = df.groupby(df.index_col).agg({'age':np.mean, 'height':np.sum, 'weight':np.sum})
df_out.to_csv(output_file, sep=',')
</code></pre>
| 1 | 2016-10-18T14:53:11Z | 40,111,581 | <p>I think you can use subset:</p>
<pre><code>df_out = (df.groupby(df.index_col)
          .agg({'age':np.mean, 'height':np.sum, 'weight':np.sum})[['age','height','weight']])
</code></pre>
<p>Also you can use <code>pandas</code> functions:</p>
<pre><code>df_out = (df.groupby(df.index_col)
          .agg({'age':'mean', 'height':sum, 'weight':sum})[['age','height','weight']])
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'name':['q','q','a','a'],
'address':['a','a','s','s'],
'age':[7,8,9,10],
'height':[1,3,5,7],
'weight':[5,3,6,8]})
print (df)
address age height name weight
0 a 7 1 q 5
1 a 8 3 q 3
2 s 9 5 a 6
3 s 10 7 a 8
df.index_col = ['name', 'address']
df_out = (df.groupby(df.index_col)
          .agg({'age':'mean', 'height':sum, 'weight':sum})[['age','height','weight']])
print (df_out)
age height weight
name address
a s 9.5 12 14
q a 7.5 4 8
</code></pre>
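<p>A related option (just a sketch of an alternative, not required by the above): you can also leave the <code>agg</code> result as-is and fix the order when writing the file, since <code>to_csv</code> accepts a <code>columns</code> argument:</p>
<pre><code>df_out = df.groupby(df.index_col).agg({'age': np.mean, 'height': np.sum, 'weight': np.sum})
# write the columns in a fixed, explicit order
df_out.to_csv(output_file, sep=',', columns=['age', 'height', 'weight'])
</code></pre>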
| 0 | 2016-10-18T14:54:33Z | [
"python",
"pandas",
"format"
] |
install quandl in pycharm | 40,111,605 | <p>I am trying to install quandl in PyCharm. I am trying to do this by going into project interpreter clicking the "+" button and then selecting Quandl. I am getting the following error.</p>
<p>OSError: [Errno 13] Permission denied: '/Users/raysabbineni/Library/Python/2.7'</p>
<p>I have installed pandas and sklearn in the above way so I'm not sure what the error with quandl is. </p>
| 0 | 2016-10-18T14:55:20Z | 40,111,939 | <p>Try installing the package with <code>sudo</code> from the terminal:
<code>sudo pip install quandl</code>
or
<code>sudo easy_install quandl</code></p>
| 1 | 2016-10-18T15:12:03Z | [
"python",
"pycharm",
"quandl"
] |
Yahoo finance API missing data for certain days | 40,111,621 | <p>I'm coding a script that fetches information from the Yahoo finance API, and even though the API is pretty slow it works for what I'm going to use it for. During testing I got an IndexOutOfBounds exception, and upon investigation I see that Yahoo Finance is returning stock quote information for my stocks except that it's missing one day for one of the stocks. I suspect that when using a wider time period it will miss more days, as I have hit that exception before with a wider time period, but I thought it was something in my code that I could just fix later.</p>
<p>That the Yahoo finance API is missing entire days of stock quote information makes the API useless for me. Has anyone else experienced this, and is there any solution to it ? I'm guessing I would need to use another way to get the data.</p>
<p>Right now I'm using this python module <a href="https://pypi.python.org/pypi/yahoo-finance" rel="nofollow">https://pypi.python.org/pypi/yahoo-finance</a>.</p>
<p>Yahoo finance is the only API I've currently found that contains the information I need and support the stock exchange I need to query data for. </p>
<p>Update:
Yes, I can reproduce the problem. Below is the code to reproduce it:</p>
<pre><code>>>> import datetime as dt
>>> import yahoo_finance as yf
>>>
>>> quote = yf.Share('GJF.OL')
>>> date_from = str(dt.date.today() - dt.timedelta(days=5))
>>> date_to = str(dt.date.today())
>>> quote_his = quote.get_historical(date_from, date_to)
>>> import pprint
>>> pprint.pprint(quote_his)
[{'Adj_Close': '156.50',
'Close': '156.50',
'Date': '2016-10-14',
'High': '156.50',
'Low': '153.10',
'Open': '153.50',
'Symbol': 'GJF.OL',
'Volume': '487600'},
{'Adj_Close': '153.60',
'Close': '153.60',
'Date': '2016-10-13',
'High': '153.60',
'Low': '152.50',
'Open': '153.30',
'Symbol': 'GJF.OL',
'Volume': '508800'}]
>>>
</code></pre>
<p>This code should print out the stock information for Monday (2016-10-17), but it does not. If I choose another stock I get the stock information for Monday in the dictionary as well.</p>
<p>Update 2:
I tried another module named ystockquote and got the same result. I get information for Thursday and Friday, but not Monday. If I ask for a different quote I get info from all three days. When I go to the Yahoo Finance site, it shows the stock information for Monday in its graphs etc.</p>
<p>Update 3:
The data for GJF.OL is now showing up, which was probably due to a delay in the stock prices reaching the historical tables of the API, as pointed out in an answer below. However, I was still able to receive stock price information for other stocks on the dates for which GJF.OL had no data.</p>
<p>While I'm now receiving stock price information for the GJF.OL stock I tried to get the last 165 days of stock price information from the stocks, but there is 1 day missing from the NAS.OL stock meaning the dictionary returned does not contain any data for that day while the other stocks have that information. The stock is NAS.OL and the date is 3rd of August 2016 where the data is missing. Any ideas why this data is missing ?</p>
| 1 | 2016-10-18T14:55:52Z | 40,122,701 | <p>This probably has nothing to do with the Python bindings, and it's entirely Yahoo data. If you read through the bindings, the command that runs is essentially</p>
<pre><code>curl -G 'https://query.yahooapis.com/v1/public/yql' \
--data-urlencode 'env=store://datatables.org/alltableswithkeys' \
--data-urlencode 'format=json' \
--data-urlencode 'q=select * from yahoo.finance.historicaldata where symbol = "GJF.OL" and startDate = "2016-10-13" and endDate = "2016-10-18"'
</code></pre>
<p>When I ran it for myself, I got data for the 13th, 14th, 17th, and 18th. The table you're accessing is called <em>historical</em> data, so it's not unreasonable for there to be a 24-hour delay before the data shows up.</p>
<p>If you see a discrepancy between that endpoint and the Python bindings, you might be onto something, but the <a href="https://github.com/lukaszbanasiak/yahoo-finance/blob/master/yahoo_finance/__init__.py" rel="nofollow">code</a> for those bindings is pretty straightforward, and it seems to just pass the date range along.</p>
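<p>For what it's worth, a rough Python equivalent of that curl command (a sketch using <code>requests</code>, not part of the yahoo-finance package) makes it easy to inspect the raw response yourself:</p>
<pre><code>import requests

params = {
    'env': 'store://datatables.org/alltableswithkeys',
    'format': 'json',
    'q': ('select * from yahoo.finance.historicaldata where symbol = "GJF.OL" '
          'and startDate = "2016-10-13" and endDate = "2016-10-18"'),
}
resp = requests.get('https://query.yahooapis.com/v1/public/yql', params=params)
print(resp.json())
</code></pre>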
| 0 | 2016-10-19T05:16:31Z | [
"python",
"api",
"yahoo-finance"
] |
Problems while transforming a python dict to a list of triples? | 40,111,624 | <p>I have the following python dict:</p>
<pre><code>{'token_list': [{'quote_level': '0', 'affected_by_negation': 'no', 'token_list': [{'quote_level': '0', 'affected_by_negation': 'no', 'token_list': [{'id': '21', 'analysis_list': [{'tag': 'GNUS3S--', 'lemma': 'Robert Downey Jr', 'original_form': 'Robert Downey Jr'}], 'form': 'Robert Downey Jr', 'type': 'phrase', 'syntactic_tree_relation_list': [{'type': 'isSubject', 'id': '17'}], 'separation': '_', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'quote_level': '0', 'token_list': [{'id': '16', 'analysis_list': [{'tag': 'NPUU-N-', 'sense_id_list': [{'sense_id': '__12123288058840445720'}], 'lemma': 'Robert Downey Jr', 'original_form': 'Robert Downey Jr'}], 'sense_list': [{'info': 'sementity/class=instance@type=Top>Person>FullName@confidence=unknown', 'form': 'Robert Downey Jr', 'id': '__12123288058840445720'}], 'form': 'Robert Downey Jr', 'type': 'multiword', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'separation': '_', 'quote_level': '0', 'topic_list': {'entity_list': [{'form': 'Robert Downey Jr', 'sementity': {'type': 'Top>Person>FullName', 'confidence': 'unknown', 'class': 'instance'}, 'id': '__12123288058840445720'}]}, 'head': '15', 'inip': '0', 'affected_by_negation': 'no', 'endp': '15'}], 'head': '16', 'inip': '0', 'affected_by_negation': 'no', 'endp': '15'}, {'id': '17', 'analysis_list': [{'tag': 'VI-S3PPA-N-N9', 'lemma': 'top', 'original_form': 'has topped'}], 'form': 'has topped', 'type': 'multiword', 'syntactic_tree_relation_list': [{'type': 'iof_isSubject', 'id': '21'}, {'type': 'iof_isDirectObject', 'id': '24'}], 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'quote_level': '0', 'head': '4', 'inip': '17', 'affected_by_negation': 'no', 'endp': '26'}, {'id': '24', 'analysis_list': [{'tag': 'GN-S3D--', 'lemma': 'list', 'original_form': "Forbes magazine's annual list"}], 'form': "Forbes magazine's annual list", 'type': 'phrase', 'syntactic_tree_relation_list': [{'type': 'isDirectObject', 'id': '17'}], 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'quote_level': '0', 'token_list': [{'id': '22', 'analysis_list': [{'tag': 'GN-S3---', 'lemma': 'magazine', 'original_form': 'Forbes magazine'}], 'form': 'Forbes magazine', 'type': 'phrase', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'separation': '1', 'quote_level': '0', 'token_list': [{'quote_level': '0', 'topic_list': {'entity_list': [{'form': 'Forbes', 'semld_list': ['sumo:LastName'], 'sementity': {'type': 'Top>Person>LastName', 'fiction': 'nonfiction', 'id': 'ODENTITY_LAST_NAME', 'class': 'instance'}, 'id': '4a3369b337'}, {'form': 'Forbes', 'semld_list': ['sumo:River'], 'sementity': {'type': 'Top>Location>GeographicalEntity>WaterForm>River', 'fiction': 'nonfiction', 'id': 'ODENTITY_RIVER', 'class': 'instance'}, 'id': '9752b8b5ee'}, {'sementity': {'type': 'Top>Product>CulturalProduct>Printing>Magazine', 'fiction': 'nonfiction', 'id': 'ODENTITY_MAGAZINE', 'class': 'instance'}, 'semgeo_list': [{'country': {'form': 'United States', 'standard_list': [{'value': 'US', 'id': 'ISO3166-1-a2'}, {'value': 'USA', 'id': 'ISO3166-1-a3'}], 'id': 'beac1b545b'}, 'continent': {'form': 'AmÄÅ rica', 'id': '33fc13e6dd'}}], 'semtheme_list': [{'type': 'Top>SocialSciences>Economy', 'id': 'ODTHEME_ECONOMY'}], 'semld_list': ['sumo:Magazine'], 'form': 'Forbes', 'id': 'db0f9829ff'}]}, 'analysis_list': [{'tag': 'NP-S-N-', 
'sense_id_list': [{'sense_id': 'db0f9829ff'}], 'lemma': 'Forbes', 'original_form': 'Forbes'}, {'tag': 'NP-S-N-', 'sense_id_list': [{'sense_id': '9752b8b5ee'}], 'lemma': 'Forbes', 'original_form': 'Forbes'}, {'tag': 'NPUS-N-', 'sense_id_list': [{'sense_id': '4a3369b337'}], 'lemma': 'Forbes', 'original_form': 'Forbes'}], 'separation': '1', 'sense_list': [{'info': 'sementity/class=instance@fiction=nonfiction@id=ODENTITY_LAST_NAME@type=Top>Person>LastName\tsemld_list=sumo:LastName', 'form': 'Forbes', 'id': '4a3369b337'}, {'info': 'sementity/class=instance@fiction=nonfiction@id=ODENTITY_RIVER@type=Top>Location>GeographicalEntity>WaterForm>River\tsemld_list=sumo:River', 'form': 'Forbes', 'id': '9752b8b5ee'}, {'info': 'sementity/class=instance@fiction=nonfiction@id=ODENTITY_MAGAZINE@type=Top>Product>CulturalProduct>Printing>Magazine\tsemgeo_list/continent=AmÄÅ rica#id:33fc13e6dd@country=United States#id:beac1b545b#ISO3166-1-a2:US#ISO3166-1-a3:USA\tsemld_list=sumo:Magazine\tsemtheme_list/id=ODTHEME_ECONOMY@type=Top>SocialSciences>Economy', 'form': 'Forbes', 'id': 'db0f9829ff'}], 'inip': '28', 'form': 'Forbes', 'affected_by_negation': 'no', 'endp': '33', 'id': '6', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'quote_level': '0', 'analysis_list': [{'tag': 'NC-S-N5', 'sense_id_list': [{'sense_id': 'a0a1a5401f'}], 'lemma': 'magazine', 'original_form': 'magazine'}], 'separation': '1', 'sense_list': [{'info': 'sementity/class=class@fiction=nonfiction@id=ODENTITY_MAGAZINE@type=Top>Product>CulturalProduct>Printing>Magazine\tsemld_list=sumo:Magazine', 'form': 'magazine', 'id': 'a0a1a5401f'}], 'inip': '35', 'form': 'magazine', 'affected_by_negation': 'no', 'endp': '42', 'id': '7', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}], 'head': '7', 'inip': '28', 'affected_by_negation': 'no', 'endp': '42'}, {'quote_level': '0', 'analysis_list': [{'tag': 'WN-', 'lemma': "'s", 'original_form': "'s"}], 'separation': 'A', 'inip': '43', 'form': "'s", 'affected_by_negation': 'no', 'endp': '44', 'id': '14', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'id': '23', 'analysis_list': [{'tag': 'GN-S3---', 'lemma': 'list', 'original_form': 'annual list'}], 'form': 'annual list', 'type': 'phrase', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'separation': '1', 'quote_level': '0', 'token_list': [{'quote_level': '0', 'analysis_list': [{'tag': 'AP-N5', 'lemma': 'annual', 'original_form': 'annual'}], 'separation': '1', 'inip': '46', 'form': 'annual', 'affected_by_negation': 'no', 'endp': '51', 'id': '10', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'quote_level': '0', 'analysis_list': [{'tag': 'NC-S-N5', 'lemma': 'list', 'original_form': 'list'}], 'separation': '1', 'inip': '53', 'form': 'list', 'affected_by_negation': 'no', 'endp': '56', 'id': '11', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}], 'head': '11', 'inip': '46', 'affected_by_negation': 'no', 'endp': '56'}], 'head': '23', 'inip': '28', 'affected_by_negation': 'no', 'endp': '56'}], 'separation': '_', 'analysis_list': [{'tag': 'Z-----------', 'lemma': '*', 'original_form': "Robert Downey Jr has topped Forbes magazine's annual list"}], 'inip': '0', 'form': "Robert Downey Jr has topped Forbes magazine's annual list", 'type': 'phrase', 'endp': '56', 'id': '25', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 
'isUnderlined': 'no'}}, {'quote_level': '0', 'analysis_list': [{'tag': '1D--', 'lemma': '.', 'original_form': '.'}], 'separation': 'A', 'inip': '57', 'form': '.', 'affected_by_negation': 'no', 'endp': '57', 'id': '12', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}], 'separation': 'A', 'inip': '0', 'endp': '57', 'type': 'sentence', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'id': '18'}], 'status': {'credits': '1', 'remaining_credits': '39848', 'code': '0', 'msg': 'OK'}}
</code></pre>
<p>How can I extract in a new tuple all the <code>analysis_list</code> keys with them respective values?:</p>
<pre><code>((NPUU-N-, Robert Downey Jr, Robert Downey Jr),(NPUU-N-, Robert Downey Jr, Robert Downey Jr), (VI-S3PPA-N-N9, top, has topped'), (GN-S3D--, list, Forbes magazine's annual list), (GN-S3---, magazine, 'original_form': 'Forbes magazine'), (NP-S-N-, Forbes, Forbes), ..., (1D--, ., .))
</code></pre>
<p>I tried the following, with pandas:</p>
<p>In:</p>
<pre><code>df = json_normalize(data['token_list'])
data = df['token_list'].to_dict()
data=data.values()
print(data)
</code></pre>
<p>out:</p>
<pre><code>dict_values([[{'quote_level': '0', 'analysis_list': [{'tag': 'Z-----------', 'lemma': '*', 'original_form': "Robert Downey Jr has topped Forbes magazine's annual list"}], 'token_list': [{'id': '21', 'analysis_list': [{'tag': 'GNUS3S--', 'lemma': 'Robert Downey Jr', 'original_form': 'Robert Downey Jr'}], 'form': 'Robert Downey Jr', 'type': 'phrase', 'syntactic_tree_relation_list': [{'type': 'isSubject', 'id': '17'}], 'separation': '_', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'quote_level': '0', 'token_list': [{'id': '16', 'analysis_list': [{'tag': 'NPUU-N-', 'sense_id_list': [{'sense_id': '__12123288058840445720'}], 'lemma': 'Robert Downey Jr', 'original_form': 'Robert Downey Jr'}], 'sense_list': [{'info': 'sementity/class=instance@type=Top>Person>FullName@confidence=unknown', 'form': 'Robert Downey Jr', 'id': '__12123288058840445720'}], 'form': 'Robert Downey Jr', 'type': 'multiword', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'separation': '_', 'quote_level': '0', 'topic_list': {'entity_list': [{'form': 'Robert Downey Jr', 'sementity': {'type': 'Top>Person>FullName', 'confidence': 'unknown', 'class': 'instance'}, 'id': '__12123288058840445720'}]}, 'head': '15', 'inip': '0', 'affected_by_negation': 'no', 'endp': '15'}], 'head': '16', 'inip': '0', 'affected_by_negation': 'no', 'endp': '15'}, {'id': '17', 'analysis_list': [{'tag': 'VI-S3PPA-N-N9', 'lemma': 'top', 'original_form': 'has topped'}], 'form': 'has topped', 'type': 'multiword', 'syntactic_tree_relation_list': [{'type': 'iof_isSubject', 'id': '21'}, {'type': 'iof_isDirectObject', 'id': '24'}], 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'quote_level': '0', 'head': '4', 'inip': '17', 'affected_by_negation': 'no', 'endp': '26'}, {'id': '24', 'analysis_list': [{'tag': 'GN-S3D--', 'lemma': 'list', 'original_form': "Forbes magazine's annual list"}], 'form': "Forbes magazine's annual list", 'type': 'phrase', 'syntactic_tree_relation_list': [{'type': 'isDirectObject', 'id': '17'}], 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'quote_level': '0', 'token_list': [{'id': '22', 'analysis_list': [{'tag': 'GN-S3---', 'lemma': 'magazine', 'original_form': 'Forbes magazine'}], 'form': 'Forbes magazine', 'type': 'phrase', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'separation': '1', 'quote_level': '0', 'token_list': [{'quote_level': '0', 'topic_list': {'entity_list': [{'form': 'Forbes', 'semld_list': ['sumo:LastName'], 'sementity': {'type': 'Top>Person>LastName', 'fiction': 'nonfiction', 'id': 'ODENTITY_LAST_NAME', 'class': 'instance'}, 'id': '4a3369b337'}, {'form': 'Forbes', 'semld_list': ['sumo:River'], 'sementity': {'type': 'Top>Location>GeographicalEntity>WaterForm>River', 'fiction': 'nonfiction', 'id': 'ODENTITY_RIVER', 'class': 'instance'}, 'id': '9752b8b5ee'}, {'sementity': {'type': 'Top>Product>CulturalProduct>Printing>Magazine', 'fiction': 'nonfiction', 'id': 'ODENTITY_MAGAZINE', 'class': 'instance'}, 'id': 'db0f9829ff', 'semgeo_list': [{'country': {'form': 'United States', 'standard_list': [{'value': 'US', 'id': 'ISO3166-1-a2'}, {'value': 'USA', 'id': 'ISO3166-1-a3'}], 'id': 'beac1b545b'}, 'continent': {'form': 'AmÄÅ rica', 'id': '33fc13e6dd'}}], 'semld_list': ['sumo:Magazine'], 'semtheme_list': [{'type': 'Top>SocialSciences>Economy', 'id': 'ODTHEME_ECONOMY'}], 'form': 'Forbes'}]}, 
'analysis_list': [{'tag': 'NP-S-N-', 'sense_id_list': [{'sense_id': 'db0f9829ff'}], 'lemma': 'Forbes', 'original_form': 'Forbes'}, {'tag': 'NP-S-N-', 'sense_id_list': [{'sense_id': '9752b8b5ee'}], 'lemma': 'Forbes', 'original_form': 'Forbes'}, {'tag': 'NPUS-N-', 'sense_id_list': [{'sense_id': '4a3369b337'}], 'lemma': 'Forbes', 'original_form': 'Forbes'}], 'id': '6', 'sense_list': [{'info': 'sementity/class=instance@fiction=nonfiction@id=ODENTITY_LAST_NAME@type=Top>Person>LastName\tsemld_list=sumo:LastName', 'form': 'Forbes', 'id': '4a3369b337'}, {'info': 'sementity/class=instance@fiction=nonfiction@id=ODENTITY_RIVER@type=Top>Location>GeographicalEntity>WaterForm>River\tsemld_list=sumo:River', 'form': 'Forbes', 'id': '9752b8b5ee'}, {'info': 'sementity/class=instance@fiction=nonfiction@id=ODENTITY_MAGAZINE@type=Top>Product>CulturalProduct>Printing>Magazine\tsemgeo_list/continent=AmÄÅ rica#id:33fc13e6dd@country=United States#id:beac1b545b#ISO3166-1-a2:US#ISO3166-1-a3:USA\tsemld_list=sumo:Magazine\tsemtheme_list/id=ODTHEME_ECONOMY@type=Top>SocialSciences>Economy', 'form': 'Forbes', 'id': 'db0f9829ff'}], 'inip': '28', 'form': 'Forbes', 'affected_by_negation': 'no', 'endp': '33', 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'quote_level': '0', 'analysis_list': [{'tag': 'NC-S-N5', 'sense_id_list': [{'sense_id': 'a0a1a5401f'}], 'lemma': 'magazine', 'original_form': 'magazine'}], 'id': '7', 'sense_list': [{'info': 'sementity/class=class@fiction=nonfiction@id=ODENTITY_MAGAZINE@type=Top>Product>CulturalProduct>Printing>Magazine\tsemld_list=sumo:Magazine', 'form': 'magazine', 'id': 'a0a1a5401f'}], 'inip': '35', 'form': 'magazine', 'affected_by_negation': 'no', 'endp': '42', 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}], 'head': '7', 'inip': '28', 'affected_by_negation': 'no', 'endp': '42'}, {'quote_level': '0', 'analysis_list': [{'tag': 'WN-', 'lemma': "'s", 'original_form': "'s"}], 'id': '14', 'inip': '43', 'form': "'s", 'affected_by_negation': 'no', 'endp': '44', 'separation': 'A', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'id': '23', 'analysis_list': [{'tag': 'GN-S3---', 'lemma': 'list', 'original_form': 'annual list'}], 'form': 'annual list', 'type': 'phrase', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}, 'separation': '1', 'quote_level': '0', 'token_list': [{'quote_level': '0', 'analysis_list': [{'tag': 'AP-N5', 'lemma': 'annual', 'original_form': 'annual'}], 'id': '10', 'inip': '46', 'form': 'annual', 'affected_by_negation': 'no', 'endp': '51', 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'quote_level': '0', 'analysis_list': [{'tag': 'NC-S-N5', 'lemma': 'list', 'original_form': 'list'}], 'id': '11', 'inip': '53', 'form': 'list', 'affected_by_negation': 'no', 'endp': '56', 'separation': '1', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}], 'head': '11', 'inip': '46', 'affected_by_negation': 'no', 'endp': '56'}], 'head': '23', 'inip': '28', 'affected_by_negation': 'no', 'endp': '56'}], 'id': '25', 'type': 'phrase', 'inip': '0', 'form': "Robert Downey Jr has topped Forbes magazine's annual list", 'affected_by_negation': 'no', 'endp': '56', 'separation': '_', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}, {'quote_level': '0', 'analysis_list': [{'tag': '1D--', 
'lemma': '.', 'original_form': '.'}], 'id': '12', 'inip': '57', 'form': '.', 'affected_by_negation': 'no', 'endp': '57', 'separation': 'A', 'style': {'isBold': 'no', 'isTitle': 'no', 'isItalics': 'no', 'isUnderlined': 'no'}}]])
</code></pre>
<p>Additionally I tried:</p>
<pre><code>myvalues = [i['analysis_list'] for i in data if 'analysis_list' in i]
print(myvalues)
</code></pre>
<p>However, I am getting confused by so many keys and values. What is the recommended way to generate tuples from this dictionary? I was thinking of using pandas or another alternative approach...</p>
| 0 | 2016-10-18T14:56:06Z | 40,114,072 | <p>You could use this code:</p>
<pre><code>def gettuples(data, level = 0):
    # recursively walk the nested dict/list structure
    if isinstance(data, dict):
        if 'analysis_list' in data:
            yield data['analysis_list'][0]   # take the first analysis entry of this node
        for val in data.values():
            yield from gettuples(val)
    elif isinstance(data, list):
        for val in data:
            yield from gettuples(val)

result = [[obj['lemma'], obj['original_form'], obj['tag']] for obj in gettuples(data)]
print(result)
</code></pre>
<p>See it run on <a href="https://repl.it/Dz1g/1" rel="nofollow">repl.it</a></p>
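<p>If you specifically want tuples in the order shown in the question (tag, lemma, original form) rather than inner lists, the last line can be adjusted (same <code>gettuples</code> generator, just a different comprehension):</p>
<pre><code>result = [(obj['tag'], obj['lemma'], obj['original_form']) for obj in gettuples(data)]
</code></pre>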
| 1 | 2016-10-18T17:01:08Z | [
"python",
"json",
"parsing",
"pandas",
"dictionary"
] |
Extract values from pandas stream | 40,111,647 | <p>I have very weird data coming via curl into my pandas dataframe. What I would like to do is extract values out of the column as described below. Can someone guide me how to extract the info?</p>
<pre><code>cc = pd.read_csv(cc_curl)
print(cc['srv_id'])
srv_id
------
TicketID 14593_ServiceID 104731
ServiceID
TicketID 14595_ServiceID 104732
TicketID 14609_ServiceID 0
TicketID 0_ServiceID 178282
</code></pre>
<ol>
<li>Extract 5 digit ticket id and 6 digit service id.</li>
<li>Extract nothing since there is no ticketID and service ID is blank.</li>
<li>Extract 5 digit ticket id and 6 digit service id.</li>
<li>Extract 5 digit ticket id only and service id should be blank since it is 0.</li>
<li>Extract 6 digit service id only and leave ticket ID blank since it is 0.</li>
</ol>
<p>Desired output</p>
<pre><code>srv_id
------
14593 104731
14595 104732
14609
178282
</code></pre>
| 1 | 2016-10-18T14:57:08Z | 40,112,793 | <p>If you want to extract this information into two new columns, you can do it this way:</p>
<pre><code>import numpy as np
import pandas as pd
In [22]: df[['TicketID','ServiceID']] = (
...: df.srv_id.str.extract(r'TicketID\s+(\d+).*?ServiceID\s+(\d+)', expand=True)
...: .replace(r'\b0\b', np.nan, regex=True)
...: )
...:
In [23]: df
Out[23]:
srv_id TicketID ServiceID
0 TicketID 14593_ServiceID 104731 14593 104731
1 ServiceID NaN NaN
2 TicketID 14595_ServiceID 104732 14595 104732
3 TicketID 14609_ServiceID 0 14609 NaN
4 TicketID 0_ServiceID 178282 NaN 178282
</code></pre>
<p>If you want to replace your string with extracted numbers:</p>
<pre><code>In [161]: df['new_srv_id'] = \
df.srv_id.replace([r'[^\d{5,}]+', r'\s*\b0\b\s*'], [' ', ''], regex=True)
In [162]: df
Out[162]:
srv_id new_srv_id
0 TicketID 14593_ServiceID 104731 14593 104731
1 ServiceID
2 TicketID 14595_ServiceID 104732 14595 104732
3 TicketID 14609_ServiceID 0 14609
4 TicketID 0_ServiceID 178282 178282
</code></pre>
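<p>A small variation on the first approach (equivalent, just a sketch): if you use named groups in the pattern, <code>str.extract</code> names the resulting columns for you:</p>
<pre><code>pat = r'TicketID\s+(?P<TicketID>\d+).*?ServiceID\s+(?P<ServiceID>\d+)'
extracted = df.srv_id.str.extract(pat, expand=True).replace(r'\b0\b', np.nan, regex=True)
df = df.join(extracted)
</code></pre>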
| 2 | 2016-10-18T15:52:25Z | [
"python",
"python-3.x",
"pandas",
"dataframe"
] |
IndexError: list index out of range in Python 3 | 40,111,680 | <p>I am a newbie to Python. I get an index error when I run the code below. I have looked at the relevant questions on Stack Overflow, but I still can't see what the bug is. I would appreciate any response. Thank you. Here is the code:</p>
<pre><code> class Card:
def __init__(self, suit = 0, rank = 2):
self.suit = suit
self.rank = rank
suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
rank_names =[None,'Ace','2','3','4','5','6','7','9','10','Jack','Queen', 'King']
def __str__ (self):
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
def __lt__(self,other):
t1 = self.suit, self.rank
t2 = other.suit, other.rank
return t1 < t2
class Deck:
def __init__(self):
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
res = [ ]
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
deck1 = Deck()
print(deck1)
</code></pre>
<p>Then I got the following error:</p>
<pre><code> Traceback (most recent call last):
File "/Users/Enze/Python/untitled/Inheritance.py", line 35, in <module>
print(deck1)
File "/Users/Enze/Python/untitled/Inheritance.py", line 30, in __str__
res.append(str(card))
File "/Users/Enze/Python/untitled/Inheritance.py", line 11, in __str__
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
IndexError: list index out of range
</code></pre>
| 0 | 2016-10-18T14:58:59Z | 40,111,742 | <p>You have <code>13</code> items inside your <code>rank_names</code> list, so the index of the last element is <code>12</code> (lists are indexed starting from <code>0</code>) - inside the loop</p>
<pre><code>for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
</code></pre>
<p>the maximum index you try to get is <code>13</code> (<code>range</code> generates the numbers one by one, up to but not including the stop value, so <code>range(1, 14)</code> ends at <code>13</code>), and that's why you get the <code>index out of range</code> exception.</p>
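<p>For illustration, the mismatch can be reproduced in isolation (using the <code>rank_names</code> list exactly as it appears in the question):</p>
<pre><code>>>> rank_names = [None,'Ace','2','3','4','5','6','7','9','10','Jack','Queen', 'King']
>>> len(rank_names)          # only 13 entries, so valid indices are 0..12
13
>>> list(range(1, 14))[-1]   # but the loop goes up to rank 13
13
>>> rank_names[13]
Traceback (most recent call last):
  ...
IndexError: list index out of range
</code></pre>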
| 3 | 2016-10-18T15:01:45Z | [
"python"
] |
IndexError: list index out of range in Python 3 | 40,111,680 | <p>I am a newbie to Python. I get an index error when I run the code below. I have looked at the relevant questions on Stack Overflow, but I still can't see what the bug is. I would appreciate any response. Thank you. Here is the code:</p>
<pre><code> class Card:
def __init__(self, suit = 0, rank = 2):
self.suit = suit
self.rank = rank
suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
rank_names =[None,'Ace','2','3','4','5','6','7','9','10','Jack','Queen', 'King']
def __str__ (self):
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
def __lt__(self,other):
t1 = self.suit, self.rank
t2 = other.suit, other.rank
return t1 < t2
class Deck:
def __init__(self):
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
res = [ ]
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
deck1 = Deck()
print(deck1)
</code></pre>
<p>Then I got the following error:</p>
<pre><code> Traceback (most recent call last):
File "/Users/Enze/Python/untitled/Inheritance.py", line 35, in <module>
print(deck1)
File "/Users/Enze/Python/untitled/Inheritance.py", line 30, in __str__
res.append(str(card))
File "/Users/Enze/Python/untitled/Inheritance.py", line 11, in __str__
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
IndexError: list index out of range
</code></pre>
| 0 | 2016-10-18T14:58:59Z | 40,111,746 | <pre><code>class Card:
def __init__(self, suit = 0, rank = 2):
self.suit = suit
self.rank = rank
suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
rank_names =[None,'Ace','2','3','4','5','6','7','9','10','Jack','Queen', 'King']
def __str__ (self):
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
def __lt__(self,other):
t1 = self.suit, self.rank
t2 = other.suit, other.rank
return t1 < t2
class Deck:
def __init__(self):
self.cards = []
for suit in range(4):
for rank in range(1, 13): #error here.
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
res = [ ]
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
deck1 = Deck()
print(deck1)
</code></pre>
<p>Your <code>rank_names</code> list only has 13 entries (valid indices 0-12), but the loop asks for rank 13, which causes the out-of-index issue. </p>
<p>Take this example:</p>
<pre><code>for i in range(1,14):
print(i)
</code></pre>
<p>It prints 1 to 13, including 13. But list indices start at 0, so a 13-item list only has valid slots 0-12.</p>
| 2 | 2016-10-18T15:01:51Z | [
"python"
] |
IndexError: list index out of range in Python 3 | 40,111,680 | <p>I am a newbie to Python. I get an index error when I run the code below. I have looked at the relevant questions on Stack Overflow, but I still can't see what the bug is. I would appreciate any response. Thank you. Here is the code:</p>
<pre><code> class Card:
def __init__(self, suit = 0, rank = 2):
self.suit = suit
self.rank = rank
suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
rank_names =[None,'Ace','2','3','4','5','6','7','9','10','Jack','Queen', 'King']
def __str__ (self):
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
def __lt__(self,other):
t1 = self.suit, self.rank
t2 = other.suit, other.rank
return t1 < t2
class Deck:
def __init__(self):
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
res = [ ]
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
deck1 = Deck()
print(deck1)
</code></pre>
<p>Then I got the following error:</p>
<pre><code> Traceback (most recent call last):
File "/Users/Enze/Python/untitled/Inheritance.py", line 35, in <module>
print(deck1)
File "/Users/Enze/Python/untitled/Inheritance.py", line 30, in __str__
res.append(str(card))
File "/Users/Enze/Python/untitled/Inheritance.py", line 11, in __str__
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
IndexError: list index out of range
</code></pre>
| 0 | 2016-10-18T14:58:59Z | 40,111,886 | <p>You are missing '8' in <code>rank_names</code>:</p>
<pre><code>class Card:
def __init__(self, suit = 0, rank = 2):
self.suit = suit
self.rank = rank
suit_names = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
rank_names =[None,'Ace','2','3','4','5','6','7','8','9','10','Jack','Queen', 'King']
def __str__ (self):
return '%s of %s' % (Card.rank_names[self.rank], Card.suit_names[self.suit])
def __lt__(self,other):
t1 = self.suit, self.rank
t2 = other.suit, other.rank
return t1 < t2
class Deck:
def __init__(self):
self.cards = []
for suit in range(4):
for rank in range(1, 14):
card = Card(suit, rank)
self.cards.append(card)
def __str__(self):
res = [ ]
for card in self.cards:
res.append(str(card))
return '\n'.join(res)
deck1 = Deck()
print(deck1)
</code></pre>
| 0 | 2016-10-18T15:08:50Z | [
"python"
] |
How to use a dict to subset a DataFrame? | 40,111,730 | <p>Say, I have given a DataFrame with most of the columns being categorical data.</p>
<pre><code>> data.head()
age risk sex smoking
0 28 no male no
1 58 no female no
2 27 no male yes
3 26 no male no
4 29 yes female yes
</code></pre>
<p>And I would like to subset this data by a dict of key-value pairs for those categorical variables.</p>
<pre><code>tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
<p>Hence, I would like to have the following subset.</p>
<pre><code>data[ (data.risk == 'no') & (data.smoking == 'yes') & (data.sex == 'female')]
</code></pre>
<p>What I want to do is:</p>
<pre><code>data[tmp]
</code></pre>
<p>What is the most python / pandas way of doing this?</p>
<hr>
<p>Minimal example:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
| 7 | 2016-10-18T15:01:30Z | 40,112,030 | <p>You could build a boolean vector that checks those attributes. Probably a better way though: </p>
<pre><code>df[[risk == 'no' and smoking == 'yes' and sex == 'female'
    for (_, age, risk, sex, smoking) in df.itertuples()]]
</code></pre>
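<p>A sketch of the same idea that uses the <code>tmp</code> dict directly, so the column names don't have to be hard-coded (an illustration, with the same <code>df</code>):</p>
<pre><code>mask = [all(row[col] == val for col, val in tmp.items()) for _, row in df.iterrows()]
subset = df[mask]
</code></pre>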
| 2 | 2016-10-18T15:17:10Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
How to use a dict to subset a DataFrame? | 40,111,730 | <p>Say, I have given a DataFrame with most of the columns being categorical data.</p>
<pre><code>> data.head()
age risk sex smoking
0 28 no male no
1 58 no female no
2 27 no male yes
3 26 no male no
4 29 yes female yes
</code></pre>
<p>And I would like to subset this data by a dict of key-value pairs for those categorical variables.</p>
<pre><code>tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
<p>Hence, I would like to have the following subset.</p>
<pre><code>data[ (data.risk == 'no') & (data.smoking == 'yes') & (data.sex == 'female')]
</code></pre>
<p>What I want to do is:</p>
<pre><code>data[tmp]
</code></pre>
<p>What is the most python / pandas way of doing this?</p>
<hr>
<p>Minimal example:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
| 7 | 2016-10-18T15:01:30Z | 40,112,316 | <p>You can create a look up data frame from the dictionary and then do an inner join with the <code>data</code> which will have the same effect as <code>query</code>:</p>
<pre><code>from pandas import merge, DataFrame
merge(DataFrame(tmp, index =[0]), data)
</code></pre>
<p><a href="https://i.stack.imgur.com/xW4kf.png" rel="nofollow"><img src="https://i.stack.imgur.com/xW4kf.png" alt="enter image description here"></a></p>
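<p>One caveat (a small sketch, also used in the timing section of another answer here): the plain merge drops the original row index. If you need to keep it, merge on a reset index and restore it afterwards:</p>
<pre><code>merge(DataFrame(tmp, index=[0]), data.reset_index()).set_index('index')
</code></pre>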
| 3 | 2016-10-18T15:31:04Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
How to use a dict to subset a DataFrame? | 40,111,730 | <p>Say, I have given a DataFrame with most of the columns being categorical data.</p>
<pre><code>> data.head()
age risk sex smoking
0 28 no male no
1 58 no female no
2 27 no male yes
3 26 no male no
4 29 yes female yes
</code></pre>
<p>And I would like to subset this data by a dict of key-value pairs for those categorical variables.</p>
<pre><code>tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
<p>Hence, I would like to have the following subset.</p>
<pre><code>data[ (data.risk == 'no') & (data.smoking == 'yes') & (data.sex == 'female')]
</code></pre>
<p>What I want to do is:</p>
<pre><code>data[tmp]
</code></pre>
<p>What is the most python / pandas way of doing this?</p>
<hr>
<p>Minimal example:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
| 7 | 2016-10-18T15:01:30Z | 40,112,387 | <p>I would use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#the-query-method-experimental" rel="nofollow">.query()</a> method for this task:</p>
<pre><code>In [103]: qry = ' and '.join(["{} == '{}'".format(k,v) for k,v in tmp.items()])
In [104]: qry
Out[104]: "sex == 'female' and risk == 'no' and smoking == 'yes'"
In [105]: data.query(qry)
Out[105]:
age risk sex smoking
7 24 no female yes
22 43 no female yes
23 42 no female yes
25 24 no female yes
32 29 no female yes
40 34 no female yes
43 35 no female yes
</code></pre>
| 3 | 2016-10-18T15:33:25Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
How to use a dict to subset a DataFrame? | 40,111,730 | <p>Say, I have given a DataFrame with most of the columns being categorical data.</p>
<pre><code>> data.head()
age risk sex smoking
0 28 no male no
1 58 no female no
2 27 no male yes
3 26 no male no
4 29 yes female yes
</code></pre>
<p>And I would like to subset this data by a dict of key-value pairs for those categorical variables.</p>
<pre><code>tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
<p>Hence, I would like to have the following subset.</p>
<pre><code>data[ (data.risk == 'no') & (data.smoking == 'yes') & (data.sex == 'female')]
</code></pre>
<p>What I want to do is:</p>
<pre><code>data[tmp]
</code></pre>
<p>What is the most python / pandas way of doing this?</p>
<hr>
<p>Minimal example:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
| 7 | 2016-10-18T15:01:30Z | 40,125,748 | <p>You can use list comprehension with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow"><code>all</code></a>:</p>
<pre><code>import numpy as np
import pandas as pd
np.random.seed(123)
x = pd.Series(np.random.randint(0,2,10), dtype='category')
x.cat.categories = ['no', 'yes']
y = pd.Series(np.random.randint(0,2,10), dtype='category')
y.cat.categories = ['no', 'yes']
z = pd.Series(np.random.randint(0,2,10), dtype='category')
z.cat.categories = ['male', 'female']
a = pd.Series(np.random.randint(20,60,10), dtype='category')
data = pd.DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
print (data)
age risk sex smoking
0 24 no male yes
1 23 yes male yes
2 22 no female no
3 40 no female yes
4 59 no female no
5 22 no male yes
6 40 no female no
7 27 yes male yes
8 55 yes male yes
9 48 no male no
</code></pre>
<pre><code>tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
mask = pd.concat([data[x[0]].eq(x[1]) for x in tmp.items()], axis=1).all(axis=1)
print (mask)
0 False
1 False
2 False
3 True
4 False
5 False
6 False
7 False
8 False
9 False
dtype: bool
df1 = data[mask]
print (df1)
age risk sex smoking
3 40 no female yes
</code></pre>
<pre><code>L = [(x[0], x[1]) for x in tmp.items()]
print (L)
[('smoking', 'yes'), ('sex', 'female'), ('risk', 'no')]
L = pd.concat([data[x[0]].eq(x[1]) for x in tmp.items()], axis=1)
print (L)
smoking sex risk
0 True False True
1 True False False
2 False True True
3 True True True
4 False True True
5 True False True
6 False True True
7 True False False
8 True False False
9 False False True
</code></pre>
<p><strong>Timings</strong>: </p>
<p><code>len(data)=1M</code>. </p>
<pre><code>N = 1000000
np.random.seed(123)
x = pd.Series(np.random.randint(0,2,N), dtype='category')
x.cat.categories = ['no', 'yes']
y = pd.Series(np.random.randint(0,2,N), dtype='category')
y.cat.categories = ['no', 'yes']
z = pd.Series(np.random.randint(0,2,N), dtype='category')
z.cat.categories = ['male', 'female']
a = pd.Series(np.random.randint(20,60,N), dtype='category')
data = pd.DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
#[1000000 rows x 4 columns]
print (data)
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
In [133]: %timeit (data[pd.concat([data[x[0]].eq(x[1]) for x in tmp.items()], axis=1).all(axis=1)])
10 loops, best of 3: 89.1 ms per loop
In [134]: %timeit (data.query(' and '.join(["{} == '{}'".format(k,v) for k,v in tmp.items()])))
1 loop, best of 3: 237 ms per loop
In [135]: %timeit (pd.merge(pd.DataFrame(tmp, index =[0]), data.reset_index()).set_index('index'))
1 loop, best of 3: 256 ms per loop
</code></pre>
| 2 | 2016-10-19T08:15:30Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
How to use a dict to subset a DataFrame? | 40,111,730 | <p>Say, I have given a DataFrame with most of the columns being categorical data.</p>
<pre><code>> data.head()
age risk sex smoking
0 28 no male no
1 58 no female no
2 27 no male yes
3 26 no male no
4 29 yes female yes
</code></pre>
<p>And I would like to subset this data by a dict of key-value pairs for those categorical variables.</p>
<pre><code>tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
<p>Hence, I would like to have the following subset.</p>
<pre><code>data[ (data.risk == 'no') & (data.smoking == 'yes') & (data.sex == 'female')]
</code></pre>
<p>What I want to do is:</p>
<pre><code>data[tmp]
</code></pre>
<p>What is the most python / pandas way of doing this?</p>
<hr>
<p>Minimal example:</p>
<pre><code>import numpy as np
import pandas as pd
from pandas import Series, DataFrame
x = Series(np.random.randint(0,2,50), dtype='category')
x.cat.categories = ['no', 'yes']
y = Series(np.random.randint(0,2,50), dtype='category')
y.cat.categories = ['no', 'yes']
z = Series(np.random.randint(0,2,50), dtype='category')
z.cat.categories = ['male', 'female']
a = Series(np.random.randint(20,60,50), dtype='category')
data = DataFrame({'risk':x, 'smoking':y, 'sex':z, 'age':a})
tmp = {'risk':'no', 'smoking':'yes', 'sex':'female'}
</code></pre>
| 7 | 2016-10-18T15:01:30Z | 40,126,366 | <p>I think you can could use the <code>to_dict</code> method on your dataframe, and then filter using a list comprehension:</p>
<pre class="lang-python prettyprint-override"><code>df = pd.DataFrame(data={'age':[28, 29], 'sex':["M", "F"], 'smoking':['y', 'n']})
print df
tmp = {'age': 28, 'smoking': 'y', 'sex': 'M'}
print pd.DataFrame([i for i in df.to_dict('records') if i == tmp])
>>> age sex smoking
0 28 M y
1 29 F n
age sex smoking
0 28 M y
</code></pre>
<p>You could also convert tmp to a series:</p>
<pre><code>ts = pd.Series(tmp)
print pd.DataFrame([i[1] for i in df.iterrows() if i[1].equals(ts)])
</code></pre>
| 0 | 2016-10-19T08:45:41Z | [
"python",
"pandas",
"dataframe",
"categorical-data"
] |
matplotlib image shows in black and white, but I wanted gray | 40,111,822 | <p>I have a small code sample to plot images in matplotlib, and the image is shown as this :</p>
<p><a href="https://i.stack.imgur.com/r4CPR.png" rel="nofollow"><img src="https://i.stack.imgur.com/r4CPR.png" alt="enter image description here"></a></p>
<p>Notice the image in the black box has black background, while my desired output is this :</p>
<p><a href="https://i.stack.imgur.com/0jNcL.png" rel="nofollow"><img src="https://i.stack.imgur.com/0jNcL.png" alt="enter image description here"></a></p>
<p>My code to plot the image is this :</p>
<pre><code>plt.subplot(111)
plt.imshow(np.abs(img), cmap = 'gray')
plt.title('Level 0'), plt.xticks([]), plt.yticks([])
plt.show()
</code></pre>
<p>My understanding is that <code>cmap=grey</code> should display it in grayscale. Below is a snippet of the matrix <code>img</code> being plotted :</p>
<pre><code>[[ 192.77504036 +1.21392817e-11j 151.92357434 +1.21278246e-11j
140.67585733 +6.71014111e-12j 167.76903747 +2.92050743e-12j
147.59664180 +2.33718944e-12j 98.27986577 +3.56896094e-12j
96.16252035 +5.31530804e-12j 112.39194666 +5.86689097e-12j....
</code></pre>
<p>What am I missing here ?</p>
| 1 | 2016-10-18T15:05:28Z | 40,125,210 | <p>The problem seems to be that you have three channels while there should be only one, and that the data should be normalized between <code>[0, 1]</code>. I get a proper looking gray scaled image using this:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
img = mpimg.imread('Lenna.png')
# The formula below can be changed -- the point is that you go from 3 values to 1
imgplot = plt.imshow(np.dot(img[...,:3], [0.33, 0.33, 0.33]), cmap='gray')
plt.show()
</code></pre>
<p>This gives me:</p>
<p><a href="https://i.stack.imgur.com/PX6O5.png" rel="nofollow"><img src="https://i.stack.imgur.com/PX6O5.png" alt="enter image description here"></a></p>
<p>Also, a snapshot of the data:</p>
<pre><code>[[ 0.63152942 0.63152942 0.63800002 ..., 0.64705883 0.59658825 0.50341177]
[ 0.63152942 0.63152942 0.63800002 ..., 0.64705883 0.59658825 0.50341177]
[ 0.63152942 0.63152942 0.63800002 ..., 0.64705883 0.59658825 0.50341177]
...]
</code></pre>
| 0 | 2016-10-19T07:47:43Z | [
"python",
"matplotlib"
] |
Construct pandas dataframe from a .fits file | 40,111,872 | <p>I have a .fits file that contains data. </p>
<p>I would like to construct a pandas dataframe from this particular file but I don't know how to do it. </p>
<pre><code>data = fits.open('datafile')
data.info
</code></pre>
<p>gives: </p>
<pre><code>No. Name Type Cards Dimensions Format
0 PRIMARY PrimaryHDU 6 (12, 250000) float64
</code></pre>
<p>and: </p>
<pre><code>data[0].data.shape
</code></pre>
<p>gives:</p>
<pre><code>(250000, 12)
</code></pre>
| 0 | 2016-10-18T15:07:48Z | 40,112,001 | <p>According to what you have in your question and the astropy docs (<a href="http://docs.astropy.org/en/stable/io/fits/" rel="nofollow">http://docs.astropy.org/en/stable/io/fits/</a>), it looks like you just need to do: </p>
<pre><code>from astropy.io import fits
import pandas
with fits.open('datafile') as data:
df = pandas.DataFrame(data[0].data)
</code></pre>
<p>Edit:
I don't have much experience we astropy, but other have mentioned that you can read the fits files into a <code>Table</code> object, which has a <code>to_pandas()</code> method:</p>
<pre><code>from astropy.table import Table
dat = Table.read('datafile', format='fits')
df = dat.to_pandas()
</code></pre>
<p>Might be worth investigating.</p>
<p><a href="http://docs.astropy.org/en/latest/table/pandas.html" rel="nofollow">http://docs.astropy.org/en/latest/table/pandas.html</a></p>
| 4 | 2016-10-18T15:15:52Z | [
"python",
"pandas",
"dataframe",
"astropy"
] |
Why won't values between 100 and 1000 work when playing Four is Magic, except values divisible by 100 | 40,112,003 | <p>Everything works except values between 100 and 999; values divisible by 100 do work.</p>
<p>The game is as follows:</p>
<p>Four is magic. Write a Python program (called q3.py) that given an integer from 0 to 1000 does the "4 is magic" transformation. The steps are as follows:</p>
<ol>
<li>Convert the integer n into English and count the number of letters (i.e. 21 is "twenty one" and consists of 9 letters, 102 is "one hundred two" and consists of 13 letters, 1000 is "one thousand" and consists of 11 letters).</li>
<li><p>Let <code>nlen</code> be the length of the English word equivalent for the integer n.</p>
<p>a. If <code>nlen</code> is 4, output "four is magic." Then, terminate the transformation process.</p>
<p>b. Otherwise output " is <code>nlen</code>." Repeat
step (a), where the integer n is set to <code>nlen</code>.</p></li>
</ol>
<p>Suppose the user inputs the integer 26. Then, the transformation proceeds as follows.</p>
<ol>
<li>26 is 9. , where twenty six is the 9-letter English word equivalent of 26.</li>
<li>9 is 4. , where nine is the 4-letter English word equivalent of 9.</li>
<li>4 is magic.</li>
</ol>
<p> </p>
<pre><code>def convert(number_str):
# Enter your code here.
count = 0
index = 0
x = ['zero','one','two','three','four','five','six','seven','eight','nine','ten','eleven','twelve','thirteen','fourteen','fifteen','sixteen','seventeen','eighteen','nineteen']
y = ['zero','ten','twenty','thirty','forty','fifty','sixty','seventy','eighty','ninety']
while (number_str != '4'):
if 0 <= int(number_str) <= 19:
a = len(x[int(number_str)])
print(x[int(number_str)],'is',a)
number_str = str(a)
elif 20 <= int(number_str) <= 99:
if number_str[1] == "0":
a = len(y[int(number_str[0])])
print(y[int(number_str[0])],'is',a)
number_str = str(a)
else:
a = len(y[int(number_str[0])]) + len(x[int(number_str[1])])
print(y[int(number_str[0])] + ' ' + x[int(number_str[1])],'is',a)
number_str = a
elif 100 <= int(number_str) <= 999:
rem = int(number_str) % 100
div = int(number_str) // 100
if rem == 0:
a = len(x[div]) + 7
print(x[div] + ' hundred is',a)
number_str = str(a)
else:
if (number_str[1] == '0'):
a = len(x[div]) + 7 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
elif (number_str[1] != '0'):
a = len(x[div]) + 6 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
elif number_str == '1000':
a = 11
print('one thousand is '+ str(a))
number_str = str(a)
return 'four is magic'
def main():
''' The program driver. '''
user_input = input('> ')
while user_input != 'quit':
print(convert(user_input))
user_input = input('> ')
main()
</code></pre>
<p>My question is what is wrong with this area:</p>
<pre><code>else:
if (number_str[1] == '0'):
a = len(x[div]) + 7 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
elif (number_str[1] != '0'):
a = len(x[div]) + 6 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
</code></pre>
| -3 | 2016-10-18T15:16:00Z | 40,112,409 | <p><code>convert</code> always returns <code>"four is magic"</code>. It looks like you want it to return some value based on its input. </p>
| 0 | 2016-10-18T15:34:22Z | [
"python",
"python-3.x",
"simulation"
] |
Why won't values between 100 and 1000 work when playing Four is Magic, except values divisible by 100 | 40,112,003 | <p>Everything works except values between 100 and 999; values divisible by 100 do work.</p>
<p>The game is as follows:</p>
<p>Four is magic. Write a Python program (called q3.py) that given an integer from 0 to 1000 does the "4 is magic" transformation. The steps are as follows:</p>
<ol>
<li>Convert the integer n into English and count the number of letters (i.e. 21 is "twenty one" and consists of 9 letters, 102 is "one hundred two" and consists of 13 letters, 1000 is "one thousand" and consists of 11 letters).</li>
<li><p>Let <code>nlen</code> be the length of the English word equivalent for the integer n.</p>
<p>a. If <code>nlen</code> is 4, output "four is magic." Then, terminate the transformation process.</p>
<p>b. Otherwise output " is <code>nlen</code>." Repeat
step (a), where the integer n is set to <code>nlen</code>.</p></li>
</ol>
<p>Suppose the user inputs the integer 26. Then, the transformation proceeds as follows.</p>
<ol>
<li>26 is 9. , where twenty six is the 9-letter English word equivalent of 26.</li>
<li>9 is 4. , where nine is the 4-letter English word equivalent of 9.</li>
<li>4 is magic.</li>
</ol>
<p> </p>
<pre><code>def convert(number_str):
# Enter your code here.
count = 0
index = 0
x = ['zero','one','two','three','four','five','six','seven','eight','nine','ten','eleven','twelve','thirteen','fourteen','fifteen','sixteen','seventeen','eighteen','nineteen']
y = ['zero','ten','twenty','thirty','forty','fifty','sixty','seventy','eighty','ninety']
while (number_str != '4'):
if 0 <= int(number_str) <= 19:
a = len(x[int(number_str)])
print(x[int(number_str)],'is',a)
number_str = str(a)
elif 20 <= int(number_str) <= 99:
if number_str[1] == "0":
a = len(y[int(number_str[0])])
print(y[int(number_str[0])],'is',a)
number_str = str(a)
else:
a = len(y[int(number_str[0])]) + len(x[int(number_str[1])])
print(y[int(number_str[0])] + ' ' + x[int(number_str[1])],'is',a)
number_str = a
elif 100 <= int(number_str) <= 999:
rem = int(number_str) % 100
div = int(number_str) // 100
if rem == 0:
a = len(x[div]) + 7
print(x[div] + ' hundred is',a)
number_str = str(a)
else:
if (number_str[1] == '0'):
a = len(x[div]) + 7 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
elif (number_str[1] != '0'):
a = len(x[div]) + 6 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
elif number_str == '1000':
a = 11
print('one thousand is '+ str(a))
number_str = str(a)
return 'four is magic'
def main():
''' The program driver. '''
user_input = input('> ')
while user_input != 'quit':
print(convert(user_input))
user_input = input('> ')
main()
</code></pre>
<p>My question is what is wrong with this area:</p>
<pre><code>else:
if (number_str[1] == '0'):
a = len(x[div]) + 7 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
elif (number_str[1] != '0'):
a = len(x[div]) + 6 + len(convert(str(rem)))
print(x[div] + ' hundred ' + convert(str(rem)) + ' is '+ str(a))
number_str = str(a)
</code></pre>
| -3 | 2016-10-18T15:16:00Z | 40,112,451 | <pre><code>def convert(number_str):
# Enter your code here.
count = 0
index = 0
x = ['zero','one','two','three','four','five','six','seven','eight','nine','ten','eleven','twelve','thirteen','fourteen','fifteen','sixteen','seventeen','eighteen','nineteen']
y = ['zero','ten','twenty','thirty','forty','fifty','sixty','seventy','eighty','ninety']
while (number_str != '4'):
if 0 <= int(number_str) <= 19:
a = len(x[int(number_str)])
print(x[int(number_str)],'is',a)
number_str = str(a)
elif 20 <= int(number_str) <= 99:
if number_str[1] == "0":
a = len(y[int(number_str[0])])
print(y[int(number_str[0])],'is',a)
number_str = str(a)
else:
a = len(y[int(number_str[0])]) + len(x[int(number_str[1])])
print(y[int(number_str[0])] + ' ' + x[int(number_str[1])],'is',a)
number_str = a
elif 100 <= int(number_str) <= 999:
rem = int(number_str) % 100
div = int(number_str) // 100
print(div)
if rem == 0:
a = len(x[div]) + 7
print(x[div] + ' hundred is',a)
number_str = str(a)
else:
if (number_str[1] == '0'):
a = len(x[div]) + 7 + len(str(x[rem]))
print(x[div] + ' hundred ' + str(x[rem]) + ' is '+ str(a)) # error was here
number_str = str(a)
elif (number_str[1] != '0'):
a = len(x[div]) + 6 + len(str(x[rem]))
print(x[div] + ' hundred ' + str(x[rem]) + ' is '+ str(a)) # error was here
number_str = str(a)
elif number_str == '1000':
a = 11
print('one thousand is '+ str(a))
number_str = str(a)
return 'four is magic'
def main():
''' The program driver. '''
user_input = input('> ')
while user_input != 'quit':
print(convert(user_input))
user_input = input('> ')
main()
</code></pre>
<p>Where I commented <code>#error</code> you were doing recursion, which was not what you wanted (I think). I fixed it and it should work now. When you call a function recursively (calling the same function), Python executes the newest call on the stack and returns the value it came back with, which isn't how your program was meant to work. </p>
<p>Also, I noticed you weren't calling <code>x[rem]</code> like you were supposed to for a lookup of the spelling; I fixed that too. </p>
<p>Next time please include desired output instead of making us fish for information. </p>
| 0 | 2016-10-18T15:36:05Z | [
"python",
"python-3.x",
"simulation"
] |
Calculate e - OverflowError: long int too large to convert to float | 40,112,133 | <pre><code>en_1 = 1
n = 1
factorial = 1
invfactorial = 1
while en_1 > 1e-6 :
en = en_1 +invfactorial
n = n + 1
factorial = factorial * n
invfactorial = float(1.0/factorial)
en_1 = en
print "e = %.5f"%en
</code></pre>
<p>I want to calculate e via this code, but it does not work. </p>
| 0 | 2016-10-18T15:22:23Z | 40,113,227 | <p><code>en_1 > 1e-6</code> will never evaluate to <code>False</code>: <code>en_1</code> just gets bigger and bigger (it approaches e, which is far above <code>1e-6</code>), so the loop never terminates and <code>factorial</code> grows until Python can't handle the conversion to float. Instead compare <code>invfactorial > 1e-6</code>:</p>
<pre><code>en_1 = 1
n = 1
factorial = 1
invfactorial = 1
while invfactorial > 1e-6: # changed comparison
en = en_1 +invfactorial
n = n + 1
factorial = factorial * n
invfactorial = float(1.0/factorial)
en_1 = en # don't need both en_1 and en
</code></pre>
<p>This could be made much simpler:</p>
<pre><code>e = n = fac = 1
while 1.0/fac > 1e-6:
fac *= n
e += 1.0/fac
n += 1
</code></pre>
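<p>A quick way to check the result (my addition, assuming the shorter loop above has been run so its sum is in <code>e</code>) is to compare it with <code>math.e</code>:</p>
<pre><code>import math
print "e = %.5f, math.e = %.5f" % (e, math.e)   # both round to 2.71828
</code></pre>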
| 1 | 2016-10-18T16:13:56Z | [
"python",
"python-2.7"
] |
Efficiently handling duplicates in a Python list | 40,112,139 | <p>I'm looking to compactly represent duplicates in a Python list / 1D numpy array. For instance, say we have </p>
<pre><code> x = np.array([1, 0, 0, 3, 3, 0])
</code></pre>
<p>this array has several duplicate elements, which can be represented with a </p>
<pre><code> group_id = np.array([0, 1, 1, 2, 2, 1])
</code></pre>
<p>so that all duplicates in a given cluster are found with <code>x[group_id==<some_id>]</code>.</p>
<p>The list of duplicate pairs can be efficiently computed with sorting, </p>
<pre><code>s_idx = np.argsort(x)
diff_idx = np.nonzero(x[s_idx[:-1]] == x[s_idx[1:]])[0]
</code></pre>
<p>where each pair <code>s_idx[diff_idx]</code> <-> <code>s_idx[diff_idx+1]</code> corresponds to indices in the original array that are duplicates
(here <code>array([1, 2, 3])</code> <-> <code>array([2, 5, 4])</code>).</p>
<p>However, I'm not sure how to efficiently calculate <code>group_id</code> from this linkage information for large array sizes (<code>N > 10⁶</code>).</p>
<p><strong>Edit:</strong> as suggested by <strong>@Chris_Rands</strong>, this can indeed be done with <code>itertools.groupby</code>,</p>
<pre><code> import numpy as np
import itertools
def get_group_id(x):
group_id = np.zeros(x.shape, dtype='int')
for i, j in itertools.groupby(x):
j_el = next(j)
group_id[x==j_el] = i
return group_id
</code></pre>
<p>however the scaling appears to be <strong>O(n^2)</strong>, and this would not scale to my use case (<code>N > 10⁶</code>),</p>
<pre><code> for N in [50000, 100000, 200000]:
%time _ = get_group_id(np.random.randint(0, N, size=N))
CPU times: total: 1.53 s
CPU times: total: 5.83 s
CPU times: total: 23.9 s
</code></pre>
<p>and I believe using the duplicate linkage information would be more efficient, as computing the duplicate pairs for <code>N=200000</code> takes just 6.44 µs in comparison.</p>
| 0 | 2016-10-18T15:22:35Z | 40,112,380 | <p>You could use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow"><code>numpy.unique</code></a>:</p>
<pre><code>In [13]: x = np.array([1, 0, 0, 3, 3, 0])
In [14]: values, cluster_id = np.unique(x, return_inverse=True)
In [15]: values
Out[15]: array([0, 1, 3])
In [16]: cluster_id
Out[16]: array([1, 0, 0, 2, 2, 0])
</code></pre>
<p>(The cluster IDs are assigned in the order of the sorted unique values, not in the order of a value's first appearance in the input.)</p>
<p>Locations of the items in cluster 0:</p>
<pre><code>In [22]: cid = 0
In [23]: values[cid]
Out[23]: 0
In [24]: (cluster_id == cid).nonzero()[0]
Out[24]: array([1, 2, 5])
</code></pre>
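<p>If you also want the member indices of every cluster at once, one possible follow-up (my addition, reusing <code>values</code> and <code>cluster_id</code> from above; note it still loops over the clusters):</p>
<pre><code>cluster_members = {values[cid]: np.flatnonzero(cluster_id == cid)
                   for cid in range(len(values))}
# cluster_members[0] -> array([1, 2, 5])
</code></pre>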
| 1 | 2016-10-18T15:33:12Z | [
"python",
"algorithm",
"numpy",
"grouping",
"graph-algorithm"
] |
Efficiently handling duplicates in a Python list | 40,112,139 | <p>I'm looking to compactly represent duplicates in a Python list / 1D numpy array. For instance, say we have </p>
<pre><code> x = np.array([1, 0, 0, 3, 3, 0])
</code></pre>
<p>this array has several duplicate elements, which can be represented with a </p>
<pre><code> group_id = np.array([0, 1, 1, 2, 2, 1])
</code></pre>
<p>so that all duplicates in a given cluster are found with <code>x[group_id==<some_id>]</code>.</p>
<p>The list of duplicate pairs can be efficiently computed with sorting, </p>
<pre><code>s_idx = np.argsort(x)
diff_idx = np.nonzero(x[s_idx[:-1]] == x[s_idx[1:]])[0]
</code></pre>
<p>where each pair <code>s_idx[diff_idx]</code> <-> <code>s_idx[diff_idx+1]</code> corresponds to indices in the original array that are duplicates
(here <code>array([1, 2, 3])</code> <-> <code>array([2, 5, 4])</code>).</p>
<p>However, I'm not sure how to efficiently calculate <code>group_id</code> from this linkage information for large array sizes (<code>N > 10⁶</code>).</p>
<p><strong>Edit:</strong> as suggested by <strong>@Chris_Rands</strong>, this can indeed be done with <code>itertools.groupby</code>,</p>
<pre><code> import numpy as np
import itertools
def get_group_id(x):
group_id = np.zeros(x.shape, dtype='int')
for i, j in itertools.groupby(x):
j_el = next(j)
group_id[x==j_el] = i
return group_id
</code></pre>
<p>however the scaling appears to be <strong>O(n^2)</strong>, and this would not scale to my use case (<code>N > 10⁶</code>),</p>
<pre><code> for N in [50000, 100000, 200000]:
%time _ = get_group_id(np.random.randint(0, N, size=N))
CPU times: total: 1.53 s
CPU times: total: 5.83 s
CPU times: total: 23.9 s
</code></pre>
<p>and I believe using the duplicate linkage information would be more efficient, as computing the duplicate pairs for <code>N=200000</code> takes just 6.44 µs in comparison.</p>
| 0 | 2016-10-18T15:22:35Z | 40,113,174 | <p>Here's an approach using <a href="https://numeric.scipy.org/doc/numpy/reference/generated/numpy.unique.html" rel="nofollow"><code>np.unique</code></a> to keep the order according to the first appearance of a number -</p>
<pre><code>unq, first_idx, ID = np.unique(x,return_index=1,return_inverse=1)
out = first_idx.argsort().argsort()[ID]
</code></pre>
<p>Sample run -</p>
<pre><code>In [173]: x
Out[173]: array([1, 0, 0, 3, 3, 0, 9, 0, 2, 6, 0, 0, 4, 8])
In [174]: unq, first_idx, ID = np.unique(x,return_index=1,return_inverse=1)
In [175]: first_idx.argsort().argsort()[ID]
Out[175]: array([0, 1, 1, 2, 2, 1, 3, 1, 4, 5, 1, 1, 6, 7])
</code></pre>
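<p>As a quick check (my addition), the same two lines reproduce the <code>group_id</code> asked for in the question:</p>
<pre><code>x = np.array([1, 0, 0, 3, 3, 0])
unq, first_idx, ID = np.unique(x, return_index=1, return_inverse=1)
out = first_idx.argsort().argsort()[ID]
# out -> array([0, 1, 1, 2, 2, 1]), matching the question's group_id
</code></pre>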
| 1 | 2016-10-18T16:10:59Z | [
"python",
"algorithm",
"numpy",
"grouping",
"graph-algorithm"
] |