Compare commits

...

116 Commits

Author SHA1 Message Date
Ruud
51e747049d One up 2013-01-07 23:10:42 +01:00
Ruud
0582f7d694 Urlencode spotweb id. fix #1213 2013-01-07 23:10:06 +01:00
Ruud
fa7cac7538 Merge branch 'refs/heads/develop' into desktop 2013-01-07 22:41:55 +01:00
Ruud
ec857a9b3d FTDWorld: Check for login success 2013-01-07 22:31:42 +01:00
Ruud
4d32b0b16d Use FTDWorld temp api. closes #1243 2013-01-07 22:21:44 +01:00
Ruud
ca08287cff Ignore Growl timeout. fixes #1240 2013-01-07 20:54:21 +01:00
Ruud
36fee69843 XBMC notifier for Frodo & Eden 2013-01-06 23:06:38 +01:00
ikkemaniac
c5cae5ab9b add XBMC v11 Eden notifications support
This is my approach to working with Eden; maybe a little late since Frodo is almost released, but better late than never.

- First detect the JSON-RPC version XBMC is running (once per boot of CouchPotatoServer, on the first notification; except when sending a test message, in which case the JSON version is always checked).
- Set a variable indicating whether to use JSON (or plain HTTP).
- If JSON should be used, proceed as before this commit.
- If plain HTTP should be used, use the 'notifyXBMCnoJSON' func.
- 'notifyXBMCnoJSON' just opens a specific XBMC API URL; unfortunately, importing urllib for this was necessary to escape the message strings.

TODO: support multiple XBMC hosts; right now the last host in the hosts array will set the 'useJSONnotifications' var.

Conflicts:

	couchpotato/core/notifications/xbmc/main.py
2013-01-06 22:52:59 +01:00
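The per-host detection flow this commit describes can be sketched roughly as follows; all names and result shapes here are illustrative assumptions, not the actual code in couchpotato/core/notifications/xbmc/main.py:

```python
# Illustrative sketch of the JSON-RPC capability check described in the
# commit message above; names and shapes are hypothetical, not the real
# CouchPotato implementation.

def pick_notification_method(version_result):
    """Return True to notify over JSON-RPC, False to fall back to HTTP.

    Eden (JSON-RPC v2/v4) answers JSONRPC.Version with a bare int,
    while Frodo (v6+) answers with a dict of major/minor/patch, and
    only v6+ supports GUI.ShowNotification over JSON-RPC.
    """
    version = version_result.get('version')
    if isinstance(version, dict):
        # Frodo and later: {'major': 6, 'minor': 0, 'patch': 0}
        return version.get('major', 0) >= 6
    # Eden and earlier report a plain int: use the old HTTP API instead
    return False

# One flag per configured host, checked once and then cached,
# mirroring the 'useJSONnotifications' variable from the commit.
use_json_notifications = {}

def method_for(host, version_result):
    if host not in use_json_notifications:
        use_json_notifications[host] = pick_notification_method(version_result)
    return 'json-rpc' if use_json_notifications[host] else 'http'
```

Note the caching also shows the TODO from the commit: the flag is keyed per host, but whichever host answers last would win if a single shared variable were used instead of a dict.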
ikkemaniac
9bd5688fb9 Remove services that are not required for couchpotato to run 2013-01-06 22:35:35 +01:00
ikkemaniac
1993c2b6cb Redo FreeBSD init script completely.
Use rc.subr functions and proper rc.conf variables.
2013-01-06 22:35:35 +01:00
ikkemaniac
acc8ed2092 Actually use config_file variable 2013-01-06 22:35:35 +01:00
ikkemaniac
7b4924dd7a Don't influence the PATH variable in FreeBSD rc script
Don't prepend to the PATH variable; it's ugly, unwanted, and unnecessary. Call binaries with their full path.
2013-01-06 22:35:35 +01:00
ikkemaniac
3a2861f72a fix FreeBSD init script
- add actual start command
- fix verify_couchpotato_pid function; the 'ps' command failed if the PID var was empty
- fix verify_couchpotato_pid usage; actually use the return value of verify_couchpotato_pid in the 'stop' routine
2013-01-06 22:35:35 +01:00
Ruud
4779265b43 Change xbmc description 2013-01-06 11:52:05 +01:00
ikkemaniac
f8a46ebe6d clearly state XBMC version dependency for notifications 2013-01-06 11:33:53 +01:00
ikkemaniac
383ec7e6f5 check for XBMC JSON-RPC version and improve logging info 2013-01-06 11:33:53 +01:00
Ruud
dd9118292d Newznab log error 2013-01-06 11:20:13 +01:00
Ruud
4d0f8eb4ac Default add top25 to itunes automation 2013-01-04 20:26:36 +01:00
Ruud
637b21cc68 iTunes automation cleanup 2013-01-04 20:20:52 +01:00
Joseph Gardner
da429f0cb8 Adding itunes automation provider 2013-01-04 20:15:30 +01:00
Ruud
41c2845328 Toasty cleanup 2013-01-04 20:14:25 +01:00
Travis La Marr
c2453bb070 Added Windows Phone SuperToasty Notifier 2013-01-04 19:05:58 +01:00
Ruud
a3a2c8da8e Typo 2013-01-04 19:03:36 +01:00
Ruud
a1d4bab793 NZBVortex: Delete failed option 2013-01-02 14:11:40 +01:00
Ruud
d314a9b5b3 Also check status on manual 2013-01-02 14:11:11 +01:00
Ruud
9a60f6001a Check snatched on startup 2013-01-02 14:10:41 +01:00
Ruud
96a39dbf60 Link to downloaders 2013-01-02 13:52:59 +01:00
Ruud
015675750c Properly use imdb_results 2013-01-02 13:43:24 +01:00
Ruud
bf4dc62f54 NZBVortex support. closes #1204 2013-01-02 13:31:14 +01:00
Ruud
c2382ade05 Use provider for downloader 2013-01-02 13:29:44 +01:00
Ruud
2f65545086 Extend opener with multipart 2013-01-02 13:29:28 +01:00
Ruud
3aea2cd968 Simpler CP tag regex 2013-01-02 13:29:08 +01:00
Ruud
f30cb9185c Add nzbgeek to the defaults of newznab 2013-01-02 10:40:08 +01:00
Ruud
615468e8e6 Make nzbget password a password field. closes #1205 2012-12-31 21:31:11 +01:00
Ruud
0cbee01024 Don't use unicode when not needed in urlopen 2012-12-31 13:10:11 +01:00
Ruud
c29cb39797 Automation cleanup 2012-12-30 21:43:13 +01:00
Kris Kater
580ff38136 Added moviemeter.nl automation 2012-12-30 20:51:47 +01:00
Sander Boele
6b8bca5491 added path to the freebsd init script 2012-12-30 20:51:24 +01:00
Ruud
e92b5d95ca IOLoop cleanup 2012-12-30 18:40:38 +01:00
Ruud
611a32d110 Add randomstring to each internal api 2012-12-30 18:39:16 +01:00
Ruud
74e4b015a9 Module update: Tornado 2012-12-30 18:38:52 +01:00
Ruud
1e0267cdb5 Change OMGWTF to .org 2012-12-30 11:21:23 +01:00
Ruud
041a206fb4 Rename to OMDBapi 2012-12-29 23:45:21 +01:00
Ruud
12a4d6a995 Send proper user-agent with nzbx.co 2012-12-29 21:23:15 +01:00
Ruud
b14a6c1e63 nzbx description 2012-12-29 20:22:42 +01:00
spion06
7fa08ef9b6 Update init/freebsd to not use perl
When using CouchPotato on slimmed-down versions of FreeBSD (FreeNAS, for example), perl is sometimes not available. Since the previous parsing of the INI required a "key = value" format, it is pretty simple to use awk for this.
2012-12-29 19:24:12 +01:00
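The "key = value" lookup that awk now performs can be illustrated with a short sketch (written in Python here for readability; the init script itself does this with awk, and the helper name is hypothetical):

```python
# Hypothetical re-implementation of the flat "key = value" INI lookup
# the FreeBSD init script performs with awk instead of perl.

def ini_lookup(text, key):
    """Return the value for `key` in a "key = value" style INI body."""
    for line in text.splitlines():
        if '=' not in line:
            continue  # section headers and comments have no '='
        k, _, v = line.partition('=')
        if k.strip() == key:
            return v.strip()
    return None

settings = """\
[core]
port = 5050
host = 0.0.0.0
"""
```

For example, `ini_lookup(settings, 'port')` yields `'5050'` — the same single-pass match-and-strip an `awk -F= '...'` one-liner can do without perl.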
Ruud
9a314cfbc4 One up 2012-12-29 00:03:45 +01:00
Ruud
5941d0bf77 Add version to update url 2012-12-29 00:03:36 +01:00
Ruud
d326c1c25c Merge branch 'refs/heads/master' into desktop
Conflicts:
	version.py
2012-12-28 23:31:08 +01:00
Ruud
7e6234298d Merge branch 'refs/heads/develop' 2012-12-28 23:25:40 +01:00
Ruud
5cf4b8b4d3 Binsearch provider 2012-12-28 23:01:35 +01:00
Ruud
6e56072250 Don't migrate when in development 2012-12-28 17:44:02 +01:00
Ruud
917c5552a4 Simplified providers 2012-12-27 19:53:12 +01:00
Ruud
73c5b90232 TorrentDay support. closes #1161 2012-12-25 18:38:37 +01:00
Ruud
fd53ba0637 SceneAccess: Don't add quality to query 2012-12-25 17:43:47 +01:00
Ruud
0ef3906b3d Cleanup 2012-12-25 17:15:09 +01:00
Ruud
5ab0d7a97b Cleanup torrent providers 2012-12-25 14:56:27 +01:00
Ruud
dbbbbb2f84 Module update: dateutil 2012-12-25 11:38:49 +01:00
Ruud
1bfe948a45 Newznab didn't return results 2012-12-24 19:21:43 +01:00
Ruud
0d2dcff7f0 NZB Provider cleanup 2012-12-23 02:48:36 +01:00
Ruud
d4da206f93 Merge branch 'refs/heads/develop' 2012-12-22 16:33:47 +01:00
Ruud
439cda8b63 Newznab age wrong. fix #1171 2012-12-22 16:33:28 +01:00
Ruud
bbe8362b08 Show updating screen instantly. closes #1167 2012-12-21 23:54:41 +01:00
Ruud
985a168724 Merge branch 'refs/heads/develop' 2012-12-21 23:18:00 +01:00
Ruud
5e6aea97f7 Score providers 2012-12-21 23:17:50 +01:00
Ruud
6c7c4c7aba Use same api call for all qualities. closes #1164 2012-12-21 23:17:42 +01:00
Ruud
e2f59f5ff4 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2012-12-21 22:15:18 +01:00
Ruud
b225980ce7 Use pubDate and enclosure length for newznab 2012-12-21 22:14:37 +01:00
Ruud
b8e86b378f NZBx provider 2012-12-20 15:45:04 +01:00
Ruud
031a186d71 NZBx fixes 2012-12-20 15:19:40 +01:00
Ruud
3c04eed218 Added nzbx option 2012-12-20 15:18:46 +01:00
Ruud
17e01689d9 Remove torrage.ws. fix #1157 2012-12-19 16:13:40 +01:00
Ruud
173c6194ed Merge branch 'refs/heads/develop' 2012-12-19 11:12:26 +01:00
Ruud
95c2e992b0 Use trailer naming from settings. closes #936 2012-12-19 11:11:12 +01:00
Ruud
4bffb299af Catch urlerrors. closes #1154 2012-12-19 08:01:35 +01:00
Ruud
a2c4119508 Change PublicHD to .se TLD 2012-12-18 23:36:13 +01:00
Ruud
4e9472f8ee Encode path properly before using it in walk. close #978 2012-12-18 23:18:54 +01:00
Ruud
f7911fe9f3 Remove release on new scan 2012-12-18 14:14:06 +01:00
Ruud
8ffa6a8392 Quality by id 2012-12-18 13:54:13 +01:00
Ruud
382d49f895 Delete release if it has no files 2012-12-17 22:41:56 +01:00
Ruud
570b79a67e Use height with margin to check quality. fix #582 2012-12-17 22:27:07 +01:00
Ruud
e7aafc406f Check if identifier exists before adding release. fix #1048 2012-12-17 21:10:30 +01:00
Ruud
2dcc1e096e Make path safe first 2012-12-17 21:10:04 +01:00
Ruud
9f0746a668 Encoding issues. fix #974 2012-12-17 20:50:58 +01:00
Ruud
d9c437bd7f Fix some torrentleech stuff. closes #1149 2012-12-17 19:55:27 +01:00
Ruud
7079647f87 Also try and find movie name between [] 2012-12-17 18:49:37 +01:00
Ruud
65570ba479 Improve name searching. closes #1137 2012-12-17 18:22:12 +01:00
Ruud
a57ba9026d Year match only 1900-2099 2012-12-17 18:21:15 +01:00
Ruud
63246256ee Don't remove stuff from python cache 2012-12-17 17:10:53 +01:00
Ruud
1ac0dc3bbf Don't show Environment vars when developing 2012-12-17 16:40:15 +01:00
Ruud
bcd23ad10c Merge branch 'refs/heads/develop' 2012-12-17 15:13:00 +01:00
Ruud
342d31b48a Remove ignored words which are part of title. close #1123 2012-12-17 14:08:44 +01:00
Ruud
ea7904ed9a Typo on seeders check. fix #1142 2012-12-17 13:56:11 +01:00
Ruud
ca37c2f018 Merge branch 'develop-renamer' of https://github.com/clinton-hall/CouchPotatoServer into develop 2012-12-17 13:18:23 +01:00
Ruud
5aa2146614 OMGWTFNZBs support. closes #1130
ZOMG BBQ SAUCAGES NOMNOMNOM
2012-12-17 13:07:01 +01:00
Ruud
0fd49a2c67 FTDWorld returned wrong backup category 2012-12-17 13:03:33 +01:00
Ruud
b680d84cba Don't use handler when in desktop build 2012-12-17 12:00:42 +01:00
Ruud
898e6f487d Merge branch 'refs/heads/develop' 2012-12-16 23:52:06 +01:00
clinton-hall
bb7b4cbbed Added try: except for two common errors
Does not fix the errors, but prevents the renamer from being stuck as "in progress",
allowing the next instance to run.
2012-12-13 19:45:13 -08:00
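The guard this commit describes amounts to making sure the renamer's busy flag is always cleared, even when one of the common errors fires. A minimal sketch, with hypothetical class and attribute names (not the real CouchPotato code):

```python
# Minimal sketch of the "don't stay stuck in progress" guard described
# above; Renamer/renaming_started are illustrative names only.

class Renamer(object):
    def __init__(self):
        self.renaming_started = False

    def scan(self):
        if self.renaming_started:
            return  # a previous run really is still busy
        self.renaming_started = True
        try:
            self.process_folder()  # may raise one of the common errors
        except Exception:
            pass  # the real code logs here instead of crashing the run
        finally:
            self.renaming_started = False  # next instance can always run

    def process_folder(self):
        raise IOError('simulated common error')

renamer = Renamer()
renamer.scan()  # the error is swallowed and the flag is cleared
```

The try/except swallows (or logs) the error rather than fixing it, exactly as the commit message says; the finally is what keeps the next scheduled run from being blocked.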
Ruud
6618c3927c Merge branch 'refs/heads/develop' 2012-12-11 23:15:06 +01:00
Ruud
4b58b40226 Merge branch 'refs/heads/develop' 2012-12-01 11:48:54 +01:00
Ruud
3ecc826629 Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2012-11-11 22:06:48 +01:00
Ruud
32fe3796e4 Merge branch 'refs/heads/develop' 2012-10-26 22:22:47 +02:00
Ruud
359d1aaafa Merge branch 'refs/heads/develop' 2012-10-26 14:54:12 +02:00
Ruud
fb5d336351 Merge branch 'refs/heads/develop' 2012-10-26 14:36:04 +02:00
Ruud
eb30dff986 Merge branch 'refs/heads/develop' 2012-10-13 00:00:44 +02:00
Ruud
9312336962 Merge branch 'refs/heads/develop' 2012-09-24 09:36:59 +02:00
Ruud
ade4338ea6 Merge branch 'refs/heads/develop' 2012-09-16 21:32:16 +02:00
Ruud
55b20324c0 Merge branch 'refs/heads/develop' 2012-09-16 12:36:48 +02:00
Ruud
c0fb28301d Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2012-09-16 10:46:39 +02:00
Ruud
f9c2503f81 Merge branch 'refs/heads/develop' 2012-09-14 13:15:35 +02:00
Ruud
5b4cdf05b1 Merge branch 'refs/heads/develop' 2012-09-14 13:06:56 +02:00
Ruud
6f25a6bdfd Merge branch 'refs/heads/develop' 2012-09-03 10:32:09 +02:00
Ruud
23427e95f7 Merge branch 'refs/heads/develop' 2012-08-26 23:09:51 +02:00
Ruud
90a09e573b Merge branch 'refs/heads/develop'
Conflicts:
	couchpotato/core/_base/updater/main.py
2012-08-05 16:15:53 +02:00
Ruud
e1d7440b9d Wrong branch in master 2012-07-15 00:23:44 +02:00
102 changed files with 2547 additions and 1435 deletions

View File

@@ -1,5 +1,6 @@
 from esky.util import appdir_from_executable #@UnresolvedImport
 from threading import Thread
+from version import VERSION
 from wx.lib.softwareupdate import SoftwareUpdate
 import os
 import sys
@@ -165,7 +166,7 @@ class CouchPotatoApp(wx.App, SoftwareUpdate):

     def OnInit(self):

         # Updater
-        base_url = 'http://couchpota.to/updates/'
+        base_url = 'http://couchpota.to/updates/%s/' % VERSION
         self.InitUpdates(base_url, base_url + 'changelog.html',
             icon = wx.Icon('icon.png'))

View File

@@ -1,6 +1,5 @@
 from flask.blueprints import Blueprint
 from flask.helpers import url_for
-from tornado.ioloop import IOLoop
 from tornado.web import RequestHandler, asynchronous
 from werkzeug.utils import redirect
@@ -11,7 +10,11 @@ api_nonblock = {}

 class NonBlockHandler(RequestHandler):

     stoppers = []

+    def __init__(self, application, request, **kwargs):
+        cls = NonBlockHandler
+        cls.stoppers = []
+        super(NonBlockHandler, self).__init__(application, request, **kwargs)

     @asynchronous
     def get(self, route):

View File

@@ -53,7 +53,8 @@ class Core(Plugin):
         addEvent('setting.save.core.api_key', self.checkApikey)

         # Make sure we can close-down with ctrl+c properly
-        self.signalHandler()
+        if not Env.get('desktop'):
+            self.signalHandler()

     def md5Password(self, value):
         return md5(value.encode(Env.get('encoding'))) if value else ''

View File

@@ -90,17 +90,18 @@ var UpdaterBase = new Class({
 	doUpdate: function(){
 		var self = this;

-		App.blockPage('Please wait while CouchPotato is being updated with more awesome stuff.', 'Updating');
 		Api.request('updater.update', {
 			'onComplete': function(json){
-				if(json.success){
+				if(json.success)
 					self.updating();
-				}
+				else
+					App.unBlockPage()
 			}
 		});
 	},

 	updating: function(){
+		App.blockPage('Please wait while CouchPotato is being updated with more awesome stuff.', 'Updating');
 		App.checkAvailable.delay(500, App, [1000, function(){
 			window.location.reload();
 		}]);

View File

@@ -1,20 +1,20 @@
 from base64 import b32decode, b16encode
 from couchpotato.core.event import addEvent
 from couchpotato.core.logger import CPLog
-from couchpotato.core.plugins.base import Plugin
+from couchpotato.core.providers.base import Provider
 import random
 import re

 log = CPLog(__name__)

-class Downloader(Plugin):
+class Downloader(Provider):

     type = []
+    http_time_between_calls = 0

     torrent_sources = [
         'http://torrage.com/torrent/%s.torrent',
-        'http://torrage.ws/torrent/%s.torrent',
         'http://torcache.net/torrent/%s.torrent',
     ]

View File

@@ -10,7 +10,7 @@ config = [{
             'tab': 'downloaders',
             'name': 'nzbget',
             'label': 'NZBGet',
-            'description': 'Send NZBs to your NZBGet installation.',
+            'description': 'Use <a href="http://nzbget.sourceforge.net/Main_Page" target="_blank">NZBGet</a> to download NZBs.',
             'options': [
                 {
                     'name': 'enabled',
@@ -25,6 +25,7 @@ config = [{
                 },
                 {
                     'name': 'password',
+                    'type': 'password',
                     'description': 'Default NZBGet password is <i>tegbzn6789</i>',
                 },
                 {

View File

@@ -0,0 +1,46 @@
+from .main import NZBVortex
+
+def start():
+    return NZBVortex()
+
+config = [{
+    'name': 'nzbvortex',
+    'groups': [
+        {
+            'tab': 'downloaders',
+            'name': 'nzbvortex',
+            'label': 'NZBVortex',
+            'description': 'Use <a href="http://www.nzbvortex.com/landing/" target="_blank">NZBVortex</a> to download NZBs.',
+            'wizard': True,
+            'options': [
+                {
+                    'name': 'enabled',
+                    'default': 0,
+                    'type': 'enabler',
+                    'radio_group': 'nzb',
+                },
+                {
+                    'name': 'host',
+                    'default': 'https://localhost:4321',
+                },
+                {
+                    'name': 'api_key',
+                    'label': 'Api Key',
+                },
+                {
+                    'name': 'manual',
+                    'default': False,
+                    'type': 'bool',
+                    'advanced': True,
+                    'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
+                },
+                {
+                    'name': 'delete_failed',
+                    'default': True,
+                    'type': 'bool',
+                    'description': 'Delete a release after the download has failed.',
+                },
+            ],
+        }
+    ],
+}]

View File

@@ -0,0 +1,176 @@
+from base64 import b64encode
+from couchpotato.core.downloaders.base import Downloader
+from couchpotato.core.helpers.encoding import tryUrlencode, ss
+from couchpotato.core.helpers.variable import cleanHost
+from couchpotato.core.logger import CPLog
+from urllib2 import URLError
+from uuid import uuid4
+import hashlib
+import httplib
+import json
+import socket
+import ssl
+import sys
+import traceback
+import urllib2
+
+log = CPLog(__name__)
+
+class NZBVortex(Downloader):
+
+    type = ['nzb']
+    api_level = None
+    session_id = None
+
+    def download(self, data = {}, movie = {}, manual = False, filedata = None):
+
+        if self.isDisabled(manual) or not self.isCorrectType(data.get('type')) or not self.getApiLevel():
+            return
+
+        # Send the nzb
+        try:
+            nzb_filename = self.createFileName(data, filedata, movie)
+            self.call('nzb/add', params = {'file': (ss(nzb_filename), filedata)}, multipart = True)
+            return True
+        except:
+            log.error('Something went wrong sending the NZB file: %s', traceback.format_exc())
+            return False
+
+    def getAllDownloadStatus(self):
+
+        if self.isDisabled(manual = True):
+            return False
+
+        raw_statuses = self.call('nzb')
+
+        statuses = []
+        for item in raw_statuses.get('nzbs', []):
+
+            # Check status
+            status = 'busy'
+            if item['state'] == 20:
+                status = 'completed'
+            elif item['state'] in [21, 22, 24]:
+                status = 'failed'
+
+            statuses.append({
+                'id': item['id'],
+                'name': item['uiTitle'],
+                'status': status,
+                'original_status': item['state'],
+                'timeleft': -1,
+            })
+
+        return statuses
+
+    def removeFailed(self, item):
+
+        if not self.conf('delete_failed', default = True):
+            return False
+
+        log.info('%s failed downloading, deleting...', item['name'])
+
+        try:
+            self.call('nzb/%s/cancel' % item['id'])
+        except:
+            log.error('Failed deleting: %s', traceback.format_exc(0))
+            return False
+
+        return True
+
+    def login(self):
+
+        nonce = self.call('auth/nonce', auth = False).get('authNonce')
+        cnonce = uuid4().hex
+        hashed = b64encode(hashlib.sha256('%s:%s:%s' % (nonce, cnonce, self.conf('api_key'))).digest())
+
+        params = {
+            'nonce': nonce,
+            'cnonce': cnonce,
+            'hash': hashed
+        }
+
+        login_data = self.call('auth/login', parameters = params, auth = False)
+
+        # Save for later
+        if login_data.get('loginResult') == 'successful':
+            self.session_id = login_data.get('sessionID')
+            return True
+
+        log.error('Login failed, please check you api-key')
+        return False
+
+    def call(self, call, parameters = {}, repeat = False, auth = True, *args, **kwargs):
+
+        # Login first
+        if not self.session_id and auth:
+            self.login()
+
+        # Always add session id to request
+        if self.session_id:
+            parameters['sessionid'] = self.session_id
+
+        params = tryUrlencode(parameters)
+
+        url = cleanHost(self.conf('host')) + 'api/' + call
+        url_opener = urllib2.build_opener(HTTPSHandler())
+
+        try:
+            data = self.urlopen('%s?%s' % (url, params), opener = url_opener, *args, **kwargs)
+
+            if data:
+                return json.loads(data)
+        except URLError, e:
+            if hasattr(e, 'code') and e.code == 403:
+                # Try login and do again
+                if not repeat:
+                    self.login()
+                    return self.call(call, parameters = parameters, repeat = True, *args, **kwargs)
+            log.error('Failed to parsing %s: %s', (self.getName(), traceback.format_exc()))
+        except:
+            log.error('Failed to parsing %s: %s', (self.getName(), traceback.format_exc()))

+        return {}
+
+    def getApiLevel(self):
+
+        if not self.api_level:
+
+            url = cleanHost(self.conf('host')) + 'api/app/apilevel'
+            url_opener = urllib2.build_opener(HTTPSHandler())
+
+            try:
+                data = self.urlopen(url, opener = url_opener, show_error = False)
+                self.api_level = float(json.loads(data).get('apilevel'))
+            except URLError, e:
+                if hasattr(e, 'code') and e.code == 403:
+                    log.error('This version of NZBVortex isn\'t supported. Please update to 2.8.6 or higher')
+                else:
+                    log.error('NZBVortex doesn\'t seem to be running or maybe the remote option isn\'t enabled yet: %s', traceback.format_exc(1))
+
+        return self.api_level
+
+class HTTPSConnection(httplib.HTTPSConnection):
+    def __init__(self, *args, **kwargs):
+        httplib.HTTPSConnection.__init__(self, *args, **kwargs)
+
+    def connect(self):
+        sock = socket.create_connection((self.host, self.port), self.timeout)
+
+        if sys.version_info < (2, 6, 7):
+            if hasattr(self, '_tunnel_host'):
+                self.sock = sock
+                self._tunnel()
+        else:
+            if self._tunnel_host:
+                self.sock = sock
+                self._tunnel()
+
+        self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ssl_version = ssl.PROTOCOL_TLSv1)
+
+class HTTPSHandler(urllib2.HTTPSHandler):
+    def https_open(self, req):
+        return self.do_open(HTTPSConnection, req)

View File

@@ -11,7 +11,7 @@ config = [{
             'tab': 'downloaders',
             'name': 'pneumatic',
             'label': 'Pneumatic',
-            'description': 'Download the .strm file to a specific folder.',
+            'description': 'Use <a href="http://forum.xbmc.org/showthread.php?tid=97657" target="_blank">Pneumatic</a> to download .strm files.',
             'options': [
                 {
                     'name': 'enabled',

View File

@@ -10,7 +10,7 @@ config = [{
             'tab': 'downloaders',
             'name': 'sabnzbd',
            'label': 'Sabnzbd',
-            'description': 'Send NZBs to your Sabnzbd installation.',
+            'description': 'Use <a href="http://sabnzbd.org/" target="_blank">SABnzbd</a> to download NZBs.',
             'wizard': True,
             'options': [
                 {

View File

@@ -1,5 +1,5 @@
 from couchpotato.core.downloaders.base import Downloader
-from couchpotato.core.helpers.encoding import tryUrlencode
+from couchpotato.core.helpers.encoding import tryUrlencode, ss
 from couchpotato.core.helpers.variable import cleanHost, mergeDicts
 from couchpotato.core.logger import CPLog
 from urllib2 import URLError
@@ -41,7 +41,7 @@ class Sabnzbd(Downloader):

         try:
             if params.get('mode') is 'addfile':
-                sab = self.urlopen(url, timeout = 60, params = {'nzbfile': (nzb_filename, filedata)}, multipart = True, show_error = False)
+                sab = self.urlopen(url, timeout = 60, params = {'nzbfile': (ss(nzb_filename), filedata)}, multipart = True, show_error = False)
             else:
                 sab = self.urlopen(url, timeout = 60, show_error = False)
         except URLError:
@@ -65,7 +65,7 @@ class Sabnzbd(Downloader):
             return False

     def getAllDownloadStatus(self):
-        if self.isDisabled(manual = False):
+        if self.isDisabled(manual = True):
             return False

         log.debug('Checking SABnzbd download status.')
View File

@@ -10,7 +10,7 @@ config = [{
             'tab': 'downloaders',
             'name': 'synology',
             'label': 'Synology',
-            'description': 'Send torrents to Synology\'s Download Station.',
+            'description': 'Use <a href="http://www.synology.com/dsm/home_home_applications_download_station.php" target="_blank">Synology Download Station</a> to download.',
             'wizard': True,
             'options': [
                 {

View File

@@ -10,7 +10,7 @@ config = [{
             'tab': 'downloaders',
             'name': 'transmission',
             'label': 'Transmission',
-            'description': 'Send torrents to Transmission.',
+            'description': 'Use <a href="http://www.transmissionbt.com/" target="_blank">Transmission</a> to download torrents.',
             'wizard': True,
             'options': [
                 {

View File

@@ -10,7 +10,7 @@ config = [{
             'tab': 'downloaders',
             'name': 'utorrent',
             'label': 'uTorrent',
-            'description': 'Send torrents to uTorrent.',
+            'description': 'Use <a href="http://www.utorrent.com/" target="_blank">uTorrent</a> to download torrents.',
             'wizard': True,
             'options': [
                 {

View File

@@ -12,7 +12,7 @@ def runHandler(name, handler, *args, **kwargs):
         return handler(*args, **kwargs)
     except:
         from couchpotato.environment import Env
-        log.error('Error in event "%s", that wasn\'t caught: %s%s', (name, traceback.format_exc(), Env.all()))
+        log.error('Error in event "%s", that wasn\'t caught: %s%s', (name, traceback.format_exc(), Env.all() if not Env.get('dev') else ''))

 def addEvent(name, handler, priority = 100):
@@ -105,14 +105,14 @@ def fireEvent(name, *args, **kwargs):
         # Merge
         if options['merge'] and len(results) > 0:

             # Dict
-            if type(results[0]) == dict:
+            if isinstance(results[0], dict):
                 merged = {}
                 for result in results:
                     merged = mergeDicts(merged, result)

                 results = merged
             # Lists
-            elif type(results[0]) == list:
+            elif isinstance(results[0], list):
                 merged = []
                 for result in results:
                     merged += result

View File

@@ -68,7 +68,7 @@ def tryUrlencode(s):
         return new[1:]
     else:
-        for letter in toUnicode(s):
+        for letter in ss(s):
             try:
                 new += quote_plus(letter)
             except:

View File

@@ -1,3 +1,4 @@
+from couchpotato.core.helpers.encoding import simplifyString, toSafeString
 from couchpotato.core.logger import CPLog
 import hashlib
 import os.path
@@ -153,6 +154,16 @@ def getTitle(library_dict):
         log.error('Could not get title for library item: %s', library_dict)
         return None

+def possibleTitles(raw_title):
+
+    titles = []
+
+    titles.append(toSafeString(raw_title).lower())
+    titles.append(raw_title.lower())
+    titles.append(simplifyString(raw_title))
+
+    return list(set(titles))
+
 def randomString(size = 8, chars = string.ascii_uppercase + string.digits):
     return ''.join(random.choice(chars) for x in range(size))

View File

@@ -37,8 +37,11 @@ class Growl(Notification):
             )

             self.growl.register()
             self.registered = True
-        except:
-            log.error('Failed register of growl: %s', traceback.format_exc())
+        except Exception, e:
+            if 'timed out' in str(e):
+                self.registered = True
+            else:
+                log.error('Failed register of growl: %s', traceback.format_exc())

     def notify(self, message = '', data = {}, listener = None):
         if self.isDisabled(): return

View File

@@ -0,0 +1,32 @@
+from .main import Toasty
+
+def start():
+    return Toasty()
+
+config = [{
+    'name': 'toasty',
+    'groups': [
+        {
+            'tab': 'notifications',
+            'name': 'toasty',
+            'options': [
+                {
+                    'name': 'enabled',
+                    'default': 0,
+                    'type': 'enabler',
+                },
+                {
+                    'name': 'api_key',
+                    'label': 'Device ID',
+                },
+                {
+                    'name': 'on_snatch',
+                    'default': 0,
+                    'type': 'bool',
+                    'advanced': True,
+                    'description': 'Also send message when movie is snatched.',
+                },
+            ],
+        }
+    ],
+}]

View File

@@ -0,0 +1,30 @@
+from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
+from couchpotato.core.logger import CPLog
+from couchpotato.core.notifications.base import Notification
+import traceback
+
+log = CPLog(__name__)
+
+class Toasty(Notification):
+
+    urls = {
+        'api': 'http://api.supertoasty.com/notify/%s?%s'
+    }
+
+    def notify(self, message = '', data = {}, listener = None):
+        if self.isDisabled(): return
+
+        data = {
+            'title': self.default_title,
+            'text': toUnicode(message),
+            'sender': toUnicode("CouchPotato"),
+            'image': 'https://raw.github.com/RuudBurger/CouchPotatoServer/master/couchpotato/static/images/homescreen.png',
+        }
+
+        try:
+            self.urlopen(self.urls['api'] % (self.conf('api_key'), tryUrlencode(data)), show_error = False)
+            return True
+        except:
+            log.error('Toasty failed: %s', traceback.format_exc())
+
+        return False

View File

@@ -10,6 +10,7 @@ config = [{
             'tab': 'notifications',
             'name': 'xbmc',
             'label': 'XBMC',
+            'description': 'v11 (Eden) and v12 (Frodo)',
             'options': [
                 {
                     'name': 'enabled',

View File

@@ -4,6 +4,7 @@ from couchpotato.core.notifications.base import Notification
 from flask.helpers import json
 import base64
 import traceback
+import urllib

 log = CPLog(__name__)

@@ -11,27 +12,147 @@ log = CPLog(__name__)
 class XBMC(Notification):

     listen_to = ['renamer.after']
+    use_json_notifications = {}

     def notify(self, message = '', data = {}, listener = None):
         if self.isDisabled(): return

         hosts = splitString(self.conf('host'))

         successful = 0
         for host in hosts:
-            response = self.request(host, [
-                ('GUI.ShowNotification', {"title":"CouchPotato", "message":message}),
-                ('VideoLibrary.Scan', {}),
-            ])
+
+            if self.use_json_notifications.get(host) is None:
+                self.getXBMCJSONversion(host, message = message)
+
+            if self.use_json_notifications.get(host):
+                response = self.request(host, [
+                    ('GUI.ShowNotification', {'title':self.default_title, 'message':message}),
+                    ('VideoLibrary.Scan', {}),
+                ])
+            else:
+                response = self.notifyXBMCnoJSON(host, {'title':self.default_title, 'message':message})
+                response += self.request(host, [('VideoLibrary.Scan', {})])

             try:
                 for result in response:
-                    if result['result'] == "OK":
+                    if (result.get('result') and result['result'] == 'OK'):
                         successful += 1
+                    elif (result.get('error')):
+                        log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
             except:
                 log.error('Failed parsing results: %s', traceback.format_exc())

         return successful == len(hosts) * 2

+    def getXBMCJSONversion(self, host, message = ''):
+
+        success = False
+
+        # XBMC JSON-RPC version request
+        response = self.request(host, [
+            ('JSONRPC.Version', {})
+        ])
+        for result in response:
+            if (result.get('result') and type(result['result']['version']).__name__ == 'int'):
+                # only v2 and v4 return an int object
+                # v6 (as of XBMC v12(Frodo)) is required to send notifications
+                xbmc_rpc_version = str(result['result']['version'])
+
+                log.debug('XBMC JSON-RPC Version: %s ; Notifications by JSON-RPC only supported for v6 [as of XBMC v12(Frodo)]', xbmc_rpc_version)
+
+                # disable JSON use
+                self.use_json_notifications[host] = False
+
+                # send the text message
+                resp = self.notifyXBMCnoJSON(host, {'title':self.default_title, 'message':message})
+                for result in resp:
+                    if (result.get('result') and result['result'] == 'OK'):
+                        log.debug('Message delivered successfully!')
+                        success = True
+                        break
+                    elif (result.get('error')):
+                        log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
+                        break
+
+            elif (result.get('result') and type(result['result']['version']).__name__ == 'dict'):
+                # XBMC JSON-RPC v6 returns an array object containing
+                # major, minor and patch number
+                xbmc_rpc_version = str(result['result']['version']['major'])
+                xbmc_rpc_version += '.' + str(result['result']['version']['minor'])
+                xbmc_rpc_version += '.' + str(result['result']['version']['patch'])
+
+                log.debug('XBMC JSON-RPC Version: %s', xbmc_rpc_version)
+
+                # ok, XBMC version is supported
+                self.use_json_notifications[host] = True
+
+                # send the text message
+                resp = self.request(host, [('GUI.ShowNotification', {'title':self.default_title, 'message':message})])
+                for result in resp:
+                    if (result.get('result') and result['result'] == 'OK'):
+                        log.debug('Message delivered successfully!')
+                        success = True
+                        break
+                    elif (result.get('error')):
+                        log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
+                        break
+
+            # error getting version info (we do have contact with XBMC though)
+            elif (result.get('error')):
+                log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
+
+        log.debug('Use JSON notifications: %s ', self.use_json_notifications)
+
+        return success
+
+    def notifyXBMCnoJSON(self, host, data):
+
+        server = 'http://%s/xbmcCmds/' % host
+
+        # title, message [, timeout , image #can be added!]
+        cmd = "xbmcHttp?command=ExecBuiltIn(Notification('%s','%s'))" % (urllib.quote(data['title']), urllib.quote(data['message']))
+        server += cmd
+
+        # I have no idea what to set to, just tried text/plain and seems to be working :)
+        headers = {
+            'Content-Type': 'text/plain',
+        }
+
+        # authentication support
+        if self.conf('password'):
+            base64string = base64.encodestring('%s:%s' % (self.conf('username'), self.conf('password'))).replace('\n', '')
+            headers['Authorization'] = 'Basic %s' % base64string
+
+        try:
+            log.debug('Sending non-JSON-type request to %s: %s', (host, data))
+
+            # response wil either be 'OK':
+            # <html>
+            # <li>OK
+            # </html>
+            #
+            # or 'Error':
+            # <html>
+            # <li>Error:<message>
+            # </html>
+            #
+            response = self.urlopen(server, headers = headers)
+
+            if 'OK' in response:
+                log.debug('Returned from non-JSON-type request %s: %s', (host, response))
+                # manually fake expected response array
+                return [{'result': 'OK'}]
+            else:
+                log.error('Returned from non-JSON-type request %s: %s', (host, response))
+                # manually fake expected response array
+                return [{'result': 'Error'}]
+        except:
+            log.error('Failed sending non-JSON-type request to XBMC: %s', traceback.format_exc())
+            return [{'result': 'Error'}]
+
     def request(self, host, requests):
         server = 'http://%s/jsonrpc' % host

View File

@@ -12,7 +12,7 @@ class Automation(Plugin):
fireEvent('schedule.interval', 'automation.add_movies', self.addMovies, hours = self.conf('hour', default = 12))
if not Env.get('dev'):
if Env.get('dev'):
addEvent('app.load', self.addMovies)
def addMovies(self):

View File

@@ -1,9 +1,8 @@
from StringIO import StringIO
from couchpotato import addView
from couchpotato.core.event import fireEvent, addEvent
from couchpotato.core.helpers.encoding import tryUrlencode, simplifyString, ss, \
toSafeString
from couchpotato.core.helpers.variable import getExt
from couchpotato.core.helpers.encoding import tryUrlencode, ss, toSafeString
from couchpotato.core.helpers.variable import getExt, md5
from couchpotato.core.logger import CPLog
from couchpotato.environment import Env
from flask.templating import render_template_string
@@ -99,6 +98,7 @@ class Plugin(object):
# http request
def urlopen(self, url, timeout = 30, params = None, headers = None, opener = None, multipart = False, show_error = True):
url = ss(url)
if not headers: headers = {}
if not params: params = {}
@@ -130,8 +130,11 @@ class Plugin(object):
log.info('Opening multipart url: %s, params: %s', (url, [x for x in params.iterkeys()] if isinstance(params, dict) else 'with data'))
request = urllib2.Request(url, params, headers)
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
if opener:
opener.add_handler(MultipartPostHandler())
else:
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
response = opener.open(request, timeout = timeout)
else:
@@ -222,7 +225,7 @@ class Plugin(object):
def getCache(self, cache_key, url = None, **kwargs):
cache_key = simplifyString(cache_key)
cache_key = md5(ss(cache_key))
cache = Env.get('cache').get(cache_key)
if cache:
if not Env.get('dev'): log.debug('Getting cache %s', cache_key)
@@ -242,9 +245,11 @@ class Plugin(object):
self.setCache(cache_key, data, timeout = cache_timeout)
return data
except:
if not kwargs.get('show_error'):
if not kwargs.get('show_error', True):
raise
return ''
def setCache(self, cache_key, value, timeout = 300):
log.debug('Setting cache %s', cache_key)
Env.get('cache').set(cache_key, value, timeout)
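The `getCache` change above swaps `simplifyString` for `md5(ss(cache_key))`, so cache keys become fixed-length hex digests instead of arbitrary strings, and long URLs no longer produce unwieldy keys. A minimal sketch of that hashing helper, assuming couchpotato's `md5` in `helpers.variable` wraps `hashlib` like this:

```python
import hashlib

def md5(text):
    # Reduce an arbitrary cache key (often a full URL) to a
    # fixed-length, filesystem-safe hex digest.
    return hashlib.md5(text.encode('utf-8')).hexdigest()

key = md5('bluray.http://www.blu-ray.com/rss/newreleasesfeed.xml')
```

The same input always yields the same 32-character key, so lookups stay stable across restarts.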


@@ -66,10 +66,12 @@ class FileManager(Plugin):
time.sleep(3)
log.debug('Cleaning up unused files')
python_cache = Env.get('cache')._path
try:
db = get_session()
for root, dirs, walk_files in os.walk(Env.get('cache_dir')):
for filename in walk_files:
if root == python_cache: continue
file_path = os.path.join(root, filename)
f = db.query(File).filter(File.path == toUnicode(file_path)).first()
if not f:


@@ -115,12 +115,35 @@ class Manage(Plugin):
if done_movie['library']['identifier'] not in added_identifiers:
fireEvent('movie.delete', movie_id = done_movie['id'], delete_from = 'all')
else:
for release in done_movie.get('releases', []):
for release_file in release.get('files', []):
# Remove release not available anymore
if not os.path.isfile(ss(release_file['path'])):
fireEvent('release.clean', release['id'])
break
if len(release.get('files', [])) == 0:
fireEvent('release.delete', release['id'])
else:
for release_file in release.get('files', []):
# Remove release not available anymore
if not os.path.isfile(ss(release_file['path'])):
fireEvent('release.clean', release['id'])
break
# Check if there are duplicate releases (different quality) use the last one, delete the rest
if len(done_movie.get('releases', [])) > 1:
used_files = {}
for release in done_movie.get('releases', []):
for release_file in release.get('files', []):
already_used = used_files.get(release_file['path'])
if already_used:
print already_used, release['id']
if already_used < release['id']:
fireEvent('release.delete', release['id'], single = True) # delete this one
else:
fireEvent('release.delete', already_used, single = True) # delete previous one
break
else:
used_files[release_file['path']] = release.get('id')
del used_files
Env.prop('manage.last_update', time.time())
except:
@@ -153,7 +176,7 @@ class Manage(Plugin):
'to_go': total_found,
}
if group['library']:
if group['library'] and group['library'].get('identifier'):
identifier = group['library'].get('identifier')
added_identifiers.append(identifier)
@@ -187,5 +210,5 @@ class Manage(Plugin):
groups = fireEvent('scanner.scan', folder = folder, files = files, single = True)
for group in groups.itervalues():
if group['library']:
if group['library'] and group['library'].get('identifier'):
fireEvent('release.add', group = group)
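The duplicate-release cleanup in the Manage hunk above keeps one release per file path and deletes the rest by comparing ids. A standalone sketch of that bookkeeping (simplified: it returns the ids to delete instead of firing `release.delete` events):

```python
def releases_to_delete(releases):
    # When two releases claim the same file path, keep one and mark the
    # other for deletion, mirroring the id comparison in the hunk above.
    used_files = {}
    to_delete = set()
    for release in releases:
        for path in release.get('files', []):
            already_used = used_files.get(path)
            if already_used is None:
                used_files[path] = release['id']
            elif already_used < release['id']:
                to_delete.add(release['id'])   # delete this one
            else:
                to_delete.add(already_used)    # delete the previous one
                used_files[path] = release['id']
    return to_delete
```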


@@ -7,6 +7,7 @@ from couchpotato.core.helpers.variable import mergeDicts, md5, getExt
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.settings.model import Quality, Profile, ProfileType
from sqlalchemy.sql.expression import or_
import os.path
import re
import time
@@ -18,8 +19,8 @@ class QualityPlugin(Plugin):
qualities = [
{'identifier': 'bd50', 'hd': True, 'size': (15000, 60000), 'label': 'BR-Disk', 'alternative': ['bd25'], 'allow': ['1080p'], 'ext':[], 'tags': ['bdmv', 'certificate', ('complete', 'bluray')]},
{'identifier': '1080p', 'hd': True, 'size': (5000, 20000), 'label': '1080P', 'width': 1920, 'alternative': [], 'allow': [], 'ext':['mkv', 'm2ts'], 'tags': ['m2ts']},
{'identifier': '720p', 'hd': True, 'size': (3500, 10000), 'label': '720P', 'width': 1280, 'alternative': [], 'allow': [], 'ext':['mkv', 'ts']},
{'identifier': '1080p', 'hd': True, 'size': (5000, 20000), 'label': '1080P', 'width': 1920, 'height': 1080, 'alternative': [], 'allow': [], 'ext':['mkv', 'm2ts'], 'tags': ['m2ts']},
{'identifier': '720p', 'hd': True, 'size': (3500, 10000), 'label': '720P', 'width': 1280, 'height': 720, 'alternative': [], 'allow': [], 'ext':['mkv', 'ts']},
{'identifier': 'brrip', 'hd': True, 'size': (700, 7000), 'label': 'BR-Rip', 'alternative': ['bdrip'], 'allow': ['720p'], 'ext':['avi']},
{'identifier': 'dvdr', 'size': (3000, 10000), 'label': 'DVD-R', 'alternative': [], 'allow': [], 'ext':['iso', 'img'], 'tags': ['pal', 'ntsc', 'video_ts', 'audio_ts']},
{'identifier': 'dvdrip', 'size': (600, 2400), 'label': 'DVD-Rip', 'width': 720, 'alternative': ['dvdrip'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg'], 'tags': [('dvd', 'rip'), ('dvd', 'xvid'), ('dvd', 'divx')]},
@@ -76,7 +77,7 @@ class QualityPlugin(Plugin):
db = get_session()
quality_dict = {}
quality = db.query(Quality).filter_by(identifier = identifier).first()
quality = db.query(Quality).filter(or_(Quality.identifier == identifier, Quality.id == identifier)).first()
if quality:
quality_dict = dict(self.getQuality(quality.identifier), **quality.to_dict())
@@ -198,9 +199,14 @@ class QualityPlugin(Plugin):
for quality in self.all():
# Last check on resolution only
if quality.get('width', 480) == extra.get('resolution_width', 0):
log.debug('Found %s via resolution_width: %s == %s', (quality['identifier'], quality.get('width', 480), extra.get('resolution_width', 0)))
# Check width resolution, range 20
if (quality.get('width', 720) - 20) <= extra.get('resolution_width', 0) <= (quality.get('width', 720) + 20):
log.debug('Found %s via resolution_width: %s == %s', (quality['identifier'], quality.get('width', 720), extra.get('resolution_width', 0)))
return self.setCache(hash, quality)
# Check height resolution, range 20
if (quality.get('height', 480) - 20) <= extra.get('resolution_height', 0) <= (quality.get('height', 480) + 20):
log.debug('Found %s via resolution_height: %s == %s', (quality['identifier'], quality.get('height', 480), extra.get('resolution_height', 0)))
return self.setCache(hash, quality)
if 480 <= extra.get('resolution_width', 0) <= 720:
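The quality-guessing change above replaces an exact width comparison with a ±20 pixel window on both width and height, since rips are often cropped slightly off the nominal resolution. A sketch of that range check, with `quality` dicts shaped like the entries in `qualities` above:

```python
def matches_resolution(quality, width = 0, height = 0, tolerance = 20):
    # A 1916x1080 rip should still register as 1080p: compare detected
    # dimensions against the quality's nominal ones within +/- tolerance.
    q_width, q_height = quality.get('width'), quality.get('height')
    if q_width and q_width - tolerance <= width <= q_width + tolerance:
        return True
    if q_height and q_height - tolerance <= height <= q_height + tolerance:
        return True
    return False
```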


@@ -133,6 +133,9 @@ class Release(Plugin):
db.delete(release_file)
db.commit()
if len(rel.files) == 0:
self.delete(id)
return True
return False


@@ -133,13 +133,6 @@ config = [{
'type': 'choice',
'options': rename_options
},
{
'name': 'trailer_name',
'label': 'Trailer naming',
'default': '<filename>-trailer.<ext>',
'type': 'choice',
'options': rename_options
},
],
},
],


@@ -33,6 +33,7 @@ class Renamer(Plugin):
addEvent('renamer.check_snatched', self.checkSnatched)
addEvent('app.load', self.scan)
addEvent('app.load', self.checkSnatched)
if self.conf('run_every') > 0:
fireEvent('schedule.interval', 'renamer.check_snatched', self.checkSnatched, minutes = self.conf('run_every'))
@@ -313,7 +314,10 @@ class Renamer(Plugin):
elif release.status_id is snatched_status.get('id'):
if release.quality.id is group['meta_data']['quality']['id']:
log.debug('Marking release as downloaded')
release.status_id = downloaded_status.get('id')
try:
release.status_id = downloaded_status.get('id')
except Exception, e:
log.error('Failed marking release as finished: %s %s', (e, traceback.format_exc()))
db.commit()
# Remove leftover files
@@ -337,6 +341,7 @@ class Renamer(Plugin):
log.info('Removing "%s"', src)
try:
src = ss(src)
if os.path.isfile(src):
os.remove(src)
@@ -350,7 +355,10 @@ class Renamer(Plugin):
# Delete leftover folder from older releases
for delete_folder in delete_folders:
self.deleteEmptyFolder(delete_folder, show_error = False)
try:
self.deleteEmptyFolder(delete_folder, show_error = False)
except Exception, e:
log.error('Failed to delete folder: %s %s', (e, traceback.format_exc()))
# Rename all files marked
group['renamed_files'] = []
@@ -491,6 +499,7 @@ class Renamer(Plugin):
return string.replace(' ', ' ').replace(' .', '.')
def deleteEmptyFolder(self, folder, show_error = True):
folder = ss(folder)
loge = log.error if show_error else log.debug
for root, dirs, files in os.walk(folder):


@@ -89,7 +89,7 @@ class Scanner(Plugin):
'()([ab])(\.....?)$' #*a.mkv
]
cp_imdb = '(\.cp\((?P<id>tt[0-9{7}]+)\))'
cp_imdb = '(.cp.(?P<id>tt[0-9{7}]+).)'
def __init__(self):
@@ -341,7 +341,7 @@ class Scanner(Plugin):
group['files']['movie'] = self.getMediaFiles(group['unsorted_files'])
if len(group['files']['movie']) == 0:
log.error('Couldn\t find any movie files for %s', identifier)
log.error('Couldn\'t find any movie files for %s', identifier)
continue
log.debug('Getting metadata for %s', identifier)
@@ -421,7 +421,7 @@ class Scanner(Plugin):
if not data['quality']:
data['quality'] = fireEvent('quality.single', 'dvdr' if group['is_dvd'] else 'dvdrip', single = True)
data['quality_type'] = 'HD' if data.get('resolution_width', 0) >= 1280 else 'SD'
data['quality_type'] = 'HD' if data.get('resolution_width', 0) >= 1280 or data['quality'].get('hd') else 'SD'
filename = re.sub('(.cp\(tt[0-9{7}]+\))', '', files[0])
data['group'] = self.getGroup(filename[len(folder):])
@@ -775,7 +775,7 @@ class Scanner(Plugin):
return None
def findYear(self, text):
matches = re.search('(?P<year>[12]{1}[0-9]{3})', text)
matches = re.search('(?P<year>19[0-9]{2}|20[0-9]{2})', text)
if matches:
return matches.group('year')
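The `findYear` change narrows the pattern from "any 1xxx/2xxx digit run" to plausible release years, which stops resolutions like 1080 from being mistaken for years:

```python
import re

def find_year(text):
    # Only 19xx and 20xx count as years; the old pattern [12][0-9]{3}
    # also matched strings like '1080' from '1080p'.
    matches = re.search(r'(?P<year>19[0-9]{2}|20[0-9]{2})', text)
    return matches.group('year') if matches else None
```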


@@ -27,9 +27,9 @@ class Score(Plugin):
score += sizeScore(nzb['size'])
# Torrents only
if nzb.get('seeds'):
if nzb.get('seeders'):
try:
score += nzb.get('seeds') / 5
score += nzb.get('seeders') / 5
score += nzb.get('leechers') / 10
except:
pass


@@ -116,7 +116,7 @@ def sizeScore(size):
def providerScore(provider):
if provider in ['NZBMatrix', 'Nzbs', 'Newzbin']:
if provider in ['OMGWTFNZBs', 'PassThePopcorn', 'SceneAccess', 'TorrentLeech']:
return 20
if provider in ['Newznab']:


@@ -3,7 +3,8 @@ from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import simplifyString, toUnicode
from couchpotato.core.helpers.request import jsonified, getParam
from couchpotato.core.helpers.variable import md5, getTitle, splitString
from couchpotato.core.helpers.variable import md5, getTitle, splitString, \
possibleTitles
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.settings.model import Movie, Release, ReleaseInfo
@@ -297,7 +298,7 @@ class Searcher(Plugin):
imdb_results = kwargs.get('imdb_results', False)
retention = Env.setting('retention', section = 'nzb')
if nzb.get('seeds') is None and 0 < retention < nzb.get('age', 0):
if nzb.get('seeders') is None and 0 < retention < nzb.get('age', 0):
log.info2('Wrong: Outside retention, age is %s, needs %s or lower: %s', (nzb['age'], retention, nzb['name']))
return False
@@ -317,16 +318,16 @@ class Searcher(Plugin):
return False
ignored_words = splitString(self.conf('ignored_words').lower())
blacklisted = list(set(nzb_words) & set(ignored_words))
blacklisted = list(set(nzb_words) & set(ignored_words) - set(movie_words))
if self.conf('ignored_words') and blacklisted:
log.info2("Wrong: '%s' blacklisted words: %s" % (nzb['name'], ", ".join(blacklisted)))
return False
pron_tags = ['xxx', 'sex', 'anal', 'tits', 'fuck', 'porn', 'orgy', 'milf', 'boobs', 'erotica', 'erotic']
for p_tag in pron_tags:
if p_tag in nzb_words and p_tag not in movie_words:
log.info('Wrong: %s, probably pr0n', (nzb['name']))
return False
pron_words = list(set(nzb_words) & set(pron_tags) - set(movie_words))
if pron_words:
log.info('Wrong: %s, probably pr0n', (nzb['name']))
return False
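The blacklist and pr0n filters above now subtract the movie's own title words before matching, so a movie whose title legitimately contains a filtered word is not rejected. The set arithmetic, as a sketch:

```python
def filtered_words(nzb_words, filter_words, movie_words):
    # Words appearing in both the release name and the filter list,
    # minus any word that belongs to the movie title itself.
    return list(set(nzb_words) & set(filter_words) - set(movie_words))
```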
#qualities = fireEvent('quality.all', single = True)
preferred_quality = fireEvent('quality.single', identifier = quality['identifier'], single = True)
@@ -362,20 +363,21 @@ class Searcher(Plugin):
return True
# Check if nzb contains imdb link
if self.checkIMDB([nzb['description']], movie['library']['identifier']):
if self.checkIMDB([nzb.get('description', '')], movie['library']['identifier']):
return True
for movie_title in movie['library']['titles']:
movie_words = re.split('\W+', simplifyString(movie_title['title']))
for raw_title in movie['library']['titles']:
for movie_title in possibleTitles(raw_title['title']):
movie_words = re.split('\W+', simplifyString(movie_title))
if self.correctName(nzb['name'], movie_title['title']):
# if no IMDB link, at least check year range 1
if len(movie_words) > 2 and self.correctYear([nzb['name']], movie['library']['year'], 1):
return True
if self.correctName(nzb['name'], movie_title):
# if no IMDB link, at least check year range 1
if len(movie_words) > 2 and self.correctYear([nzb['name']], movie['library']['year'], 1):
return True
# if no IMDB link, at least check year
if len(movie_words) <= 2 and self.correctYear([nzb['name']], movie['library']['year'], 0):
return True
# if no IMDB link, at least check year
if len(movie_words) <= 2 and self.correctYear([nzb['name']], movie['library']['year'], 0):
return True
log.info("Wrong: %s, undetermined naming. Looking for '%s (%s)'" % (nzb['name'], movie_name, movie['library']['year']))
return False
@@ -444,12 +446,16 @@ class Searcher(Plugin):
def correctName(self, check_name, movie_name):
check_names = [check_name]
try:
check_names.append(re.search(r'([\'"])[^\1]*\1', check_name).group(0))
except:
pass
for check_name in check_names:
# Match names between "
try: check_names.append(re.search(r'([\'"])[^\1]*\1', check_name).group(0))
except: pass
# Match longest name between []
try: check_names.append(max(check_name.split('['), key = len))
except: pass
for check_name in list(set(check_names)):
check_movie = fireEvent('scanner.name_year', check_name, single = True)
try:


@@ -24,6 +24,13 @@ config = [{
'type': 'dropdown',
'values': [('1080P', '1080p'), ('720P', '720p'), ('480P', '480p')],
},
{
'name': 'name',
'label': 'Naming',
'default': '<filename>-trailer',
'advanced': True,
'description': 'Use <filename> to use above settings.'
},
],
},
],


@@ -19,10 +19,11 @@ class Trailer(Plugin):
trailers = fireEvent('trailer.search', group = group, merge = True)
if not trailers or trailers == []:
log.info('No trailers found for: %s', getTitle(group['library']))
return
return False
for trailer in trailers.get(self.conf('quality'), []):
destination = '%s-trailer.%s' % (self.getRootName(group), getExt(trailer))
filename = self.conf('name').replace('<filename>', group['filename']) + ('.%s' % getExt(trailer))
destination = os.path.join(group['destination_dir'], filename)
if not os.path.isfile(destination):
fireEvent('file.download', url = trailer, dest = destination, urlopen_kwargs = {'headers': {'User-Agent': 'Quicktime'}}, single = True)
else:
@@ -33,5 +34,5 @@ class Trailer(Plugin):
# Download first and break
break
def getRootName(self, data = {}):
return os.path.join(data['destination_dir'], data['filename'])
return True
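Trailer naming moved from a hardcoded `<root>-trailer.<ext>` to a user-configurable template where `<filename>` is substituted; a sketch of that expansion:

```python
def trailer_destination(template, movie_filename, ext):
    # Expand the configurable naming template; '<filename>' stands in
    # for the movie file's name without extension.
    return template.replace('<filename>', movie_filename) + '.' + ext

dest = trailer_destination('<filename>-trailer', 'Movie.2012.720p.BluRay', 'mp4')
```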


@@ -1,13 +1,13 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.providers.base import Provider
from couchpotato.environment import Env
import time
log = CPLog(__name__)
class Automation(Plugin):
class Automation(Provider):
enabled_option = 'automation_enabled'
@@ -19,6 +19,9 @@ class Automation(Plugin):
def _getMovies(self):
if self.isDisabled():
return
if not self.canCheck():
log.debug('Just checked, skipping %s', self.getName())
return []


@@ -1,8 +1,7 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5, tryInt
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -14,32 +13,24 @@ class Bluray(Automation, RSS):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
cache_key = 'bluray.%s' % md5(self.rss_url)
rss_data = self.getCache(cache_key, self.rss_url)
data = XMLTree.fromstring(rss_data)
rss_movies = self.getRSSData(self.rss_url)
if data is not None:
rss_movies = self.getElements(data, 'channel/item')
for movie in rss_movies:
name = self.getTextElement(movie, 'title').lower().split('blu-ray')[0].strip('(').rstrip()
year = self.getTextElement(movie, 'description').split('|')[1].strip('(').strip()
for movie in rss_movies:
name = self.getTextElement(movie, "title").lower().split("blu-ray")[0].strip("(").rstrip()
year = self.getTextElement(movie, "description").split("|")[1].strip("(").strip()
if not name.find('/') == -1: # make sure it is not a double movie release
continue
if not name.find("/") == -1: # make sure it is not a double movie release
continue
if tryInt(year) < self.getMinimal('year'):
continue
if tryInt(year) < self.getMinimal('year'):
continue
imdb = self.search(name, year)
imdb = self.search(name, year)
if imdb:
if self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
if imdb:
if self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
return movies
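The Bluray automation above pulls the movie name from the RSS item title and the year from the pipe-delimited description. Isolated, that parsing looks roughly like:

```python
def parse_bluray_item(title, description):
    # 'Skyfall (Blu-ray)' -> 'skyfall'; 'Action | 2012 | ...' -> '2012'
    name = title.lower().split('blu-ray')[0].strip('(').rstrip()
    year = description.split('|')[1].strip('(').strip()
    return name, year
```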


@@ -8,7 +8,4 @@ class CP(Automation):
def getMovies(self):
if self.isDisabled():
return
return []


@@ -1,5 +1,5 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5, getImdb, splitString, tryInt
from couchpotato.core.helpers.variable import getImdb, splitString, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import traceback
@@ -13,9 +13,6 @@ class IMDB(Automation, RSS):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
enablers = [tryInt(x) for x in splitString(self.conf('automation_urls_use'))]
@@ -29,8 +26,7 @@ class IMDB(Automation, RSS):
continue
try:
cache_key = 'imdb.rss.%s' % md5(url)
rss_data = self.getCache(cache_key, url)
rss_data = self.getHTMLData(url)
imdbs = getImdb(rss_data, multiple = True)
for imdb in imdbs:


@@ -0,0 +1,35 @@
from .main import ITunes
def start():
return ITunes()
config = [{
'name': 'itunes',
'groups': [
{
'tab': 'automation',
'name': 'itunes_automation',
'label': 'iTunes',
'description': 'From any <a href="http://itunes.apple.com/rss">iTunes</a> Store feed. Url should be the RSS link. (uses minimal requirements)',
'options': [
{
'name': 'automation_enabled',
'default': False,
'type': 'enabler',
},
{
'name': 'automation_urls_use',
'label': 'Use',
'default': ',',
},
{
'name': 'automation_urls',
'label': 'url',
'type': 'combined',
'combine': ['automation_urls_use', 'automation_urls'],
'default': 'https://itunes.apple.com/rss/topmovies/limit=25/xml,',
},
],
},
],
}]


@@ -0,0 +1,63 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5, splitString, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
from xml.etree.ElementTree import QName
import datetime
import traceback
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
class ITunes(Automation, RSS):
interval = 1800
def getIMDBids(self):
if self.isDisabled():
return
movies = []
enablers = [tryInt(x) for x in splitString(self.conf('automation_urls_use'))]
urls = splitString(self.conf('automation_urls'))
namespace = 'http://www.w3.org/2005/Atom'
namespaceIM = 'http://itunes.apple.com/rss'
index = -1
for url in urls:
index += 1
if not enablers[index]:
continue
try:
cache_key = 'itunes.rss.%s' % md5(url)
rss_data = self.getCache(cache_key, url)
data = XMLTree.fromstring(rss_data)
if data is not None:
entry_tag = str(QName(namespace, 'entry'))
rss_movies = self.getElements(data, entry_tag)
for movie in rss_movies:
name_tag = str(QName(namespaceIM, 'name'))
name = self.getTextElement(movie, name_tag)
releaseDate_tag = str(QName(namespaceIM, 'releaseDate'))
releaseDateText = self.getTextElement(movie, releaseDate_tag)
year = datetime.datetime.strptime(releaseDateText, '%Y-%m-%dT00:00:00-07:00').strftime("%Y")
imdb = self.search(name, year)
if imdb and self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
except:
log.error('Failed loading iTunes rss feed: %s %s', (url, traceback.format_exc()))
return movies


@@ -1,9 +1,7 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import datetime
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -15,25 +13,17 @@ class Kinepolis(Automation, RSS):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
cache_key = 'kinepolis.%s' % md5(self.rss_url)
rss_data = self.getCache(cache_key, self.rss_url)
data = XMLTree.fromstring(rss_data)
rss_movies = self.getRSSData(self.rss_url)
if data is not None:
rss_movies = self.getElements(data, 'channel/item')
for movie in rss_movies:
name = self.getTextElement(movie, 'title')
year = datetime.datetime.now().strftime('%Y')
for movie in rss_movies:
name = self.getTextElement(movie, "title")
year = datetime.datetime.now().strftime("%Y")
imdb = self.search(name, year)
imdb = self.search(name, year)
if imdb and self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
if imdb and self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
return movies


@@ -0,0 +1,23 @@
from .main import Moviemeter
def start():
return Moviemeter()
config = [{
'name': 'moviemeter',
'groups': [
{
'tab': 'automation',
'name': 'moviemeter_automation',
'label': 'Moviemeter',
'description': 'Imports movies from the current top 10 of moviemeter.nl. (uses minimal requirements)',
'options': [
{
'name': 'automation_enabled',
'default': False,
'type': 'enabler',
},
],
},
],
}]


@@ -0,0 +1,28 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
log = CPLog(__name__)
class Moviemeter(Automation, RSS):
interval = 1800
rss_url = 'http://www.moviemeter.nl/rss/cinema'
def getIMDBids(self):
movies = []
rss_movies = self.getRSSData(self.rss_url)
for movie in rss_movies:
name_year = fireEvent('scanner.name_year', self.getTextElement(movie, 'title'), single = True)
imdb = self.search(name_year.get('name'), name_year.get('year'))
if imdb and self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
return movies


@@ -1,11 +1,8 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5
from couchpotato.core.helpers.variable import tryInt, splitString
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
from xml.etree.ElementTree import ParseError
import traceback
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -16,39 +13,27 @@ class MoviesIO(Automation, RSS):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
enablers = self.conf('automation_urls_use').split(',')
enablers = [tryInt(x) for x in splitString(self.conf('automation_urls_use'))]
index = -1
for rss_url in self.conf('automation_urls').split(','):
for rss_url in splitString(self.conf('automation_urls')):
index += 1
if not enablers[index]:
continue
try:
cache_key = 'imdb.rss.%s' % md5(rss_url)
rss_movies = self.getRSSData(rss_url, headers = {'Referer': ''})
rss_data = self.getCache(cache_key, rss_url, headers = {'Referer': ''})
data = XMLTree.fromstring(rss_data)
rss_movies = self.getElements(data, 'channel/item')
for movie in rss_movies:
for movie in rss_movies:
nameyear = fireEvent('scanner.name_year', self.getTextElement(movie, 'title'), single = True)
imdb = self.search(nameyear.get('name'), nameyear.get('year'), imdb_only = True)
nameyear = fireEvent('scanner.name_year', self.getTextElement(movie, "title"), single = True)
imdb = self.search(nameyear.get('name'), nameyear.get('year'), imdb_only = True)
if not imdb:
continue
if not imdb:
continue
movies.append(imdb)
except ParseError:
log.debug('Failed loading Movies.io watchlist, probably empty: %s', (rss_url))
except:
log.error('Failed loading Movies.io watchlist: %s %s', (rss_url, traceback.format_exc()))
movies.append(imdb)
return movies


@@ -1,9 +1,8 @@
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.variable import md5, sha1
from couchpotato.core.helpers.variable import sha1
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import base64
import json
log = CPLog(__name__)
@@ -25,9 +24,6 @@ class Trakt(Automation):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
for movie in self.getWatchlist():
movies.append(movie.get('imdb_id'))
@@ -38,22 +34,11 @@ class Trakt(Automation):
method = (self.urls['watchlist'] % self.conf('automation_api_key')) + self.conf('automation_username')
return self.call(method)
def call(self, method_url):
try:
if self.conf('automation_password'):
headers = {
'Authorization': 'Basic %s' % base64.encodestring('%s:%s' % (self.conf('automation_username'), self.conf('automation_password')))[:-1]
}
else:
headers = {}
headers = {}
if self.conf('automation_password'):
headers['Authorization'] = 'Basic %s' % base64.encodestring('%s:%s' % (self.conf('automation_username'), self.conf('automation_password')))[:-1]
cache_key = 'trakt.%s' % md5(method_url)
json_string = self.getCache(cache_key, self.urls['base'] + method_url, headers = headers)
if json_string:
return json.loads(json_string)
except:
log.error('Failed to get data from trakt, check your login.')
return []
data = self.getJsonData(self.urls['base'] + method_url, headers = headers)
return data if data else []


@@ -1,14 +1,17 @@
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.variable import tryFloat
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.variable import tryFloat, mergeDicts, md5, \
possibleTitles, getTitle
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from urlparse import urlparse
import cookielib
import json
import re
import time
import traceback
import urllib2
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -41,6 +44,34 @@ class Provider(Plugin):
return self.is_available.get(host, False)
def getJsonData(self, url, **kwargs):
data = self.getCache(md5(url), url, **kwargs)
if data:
try:
return json.loads(data)
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
return []
def getRSSData(self, url, **kwargs):
data = self.getCache(md5(url), url, **kwargs)
if data:
try:
data = XMLTree.fromstring(data)
return self.getElements(data, 'channel/item')
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
return []
def getHTMLData(self, url, **kwargs):
return self.getCache(md5(url), url, **kwargs)
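The new `getJsonData`/`getRSSData`/`getHTMLData` helpers centralize the fetch-through-cache-then-parse pattern that each automation provider previously hand-rolled. The JSON variant, sketched with the cache fetch abstracted into a callable:

```python
import hashlib
import json

def get_json_data(fetch, url):
    # fetch(cache_key, url) stands in for Plugin.getCache: it returns the
    # cached body or downloads it. Parse errors degrade to an empty list
    # rather than propagating into the provider.
    data = fetch(hashlib.md5(url.encode('utf-8')).hexdigest(), url)
    if data:
        try:
            return json.loads(data)
        except ValueError:
            pass  # the real helper logs the traceback here
    return []
```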
class YarrProvider(Provider):
@@ -54,12 +85,8 @@ class YarrProvider(Provider):
def __init__(self):
addEvent('provider.belongs_to', self.belongsTo)
addEvent('%s.search' % self.type, self.search)
addEvent('yarr.search', self.search)
addEvent('nzb.feed', self.feed)
def login(self):
try:
@@ -68,15 +95,20 @@ class YarrProvider(Provider):
urllib2.install_opener(opener)
log.info2('Logging into %s', self.urls['login'])
f = opener.open(self.urls['login'], self.getLoginParams())
f.read()
output = f.read()
f.close()
self.login_opener = opener
return True
if self.loginSuccess(output):
self.login_opener = opener
return True
except:
log.error('Failed to login %s: %s', (self.getName(), traceback.format_exc()))
return False
def loginSuccess(self, output):
return True
def loginDownload(self, url = '', nzb_id = ''):
try:
if not self.login_opener and not self.login():
@@ -96,11 +128,29 @@ class YarrProvider(Provider):
return 'try_next'
def feed(self):
return []
def search(self, movie, quality):
return []
if self.isDisabled():
return []
# Login if needed
if self.urls.get('login') and (not self.login_opener and not self.login()):
log.error('Failed to login to: %s', self.getName())
return []
# Create result container
imdb_results = hasattr(self, '_search')
results = ResultList(self, movie, quality, imdb_results = imdb_results)
# Do search based on imdb id
if imdb_results:
self._search(movie, quality, results)
# Search possible titles
else:
for title in possibleTitles(getTitle(movie['library'])):
self._searchOnTitle(title, movie, quality, results)
return results
def belongsTo(self, url, provider = None, host = None):
try:
@@ -148,10 +198,65 @@ class YarrProvider(Provider):
return [self.cat_backup_id]
def found(self, new):
if not new.get('provider_extra'):
new['provider_extra'] = ''
else:
new['provider_extra'] = ', %s' % new['provider_extra']
log.info('Found: score(%(score)s) on %(provider)s%(provider_extra)s: %(name)s', new)
class ResultList(list):
result_ids = None
provider = None
movie = None
quality = None
def __init__(self, provider, movie, quality, **kwargs):
self.result_ids = []
self.provider = provider
self.movie = movie
self.quality = quality
self.kwargs = kwargs
super(ResultList, self).__init__()
def extend(self, results):
for r in results:
self.append(r)
def append(self, result):
new_result = self.fillResult(result)
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new_result, movie = self.movie, quality = self.quality,
imdb_results = self.kwargs.get('imdb_results', False), single = True)
if is_correct_movie and new_result['id'] not in self.result_ids:
new_result['score'] += fireEvent('score.calculate', new_result, self.movie, single = True)
self.found(new_result)
self.result_ids.append(result['id'])
super(ResultList, self).append(new_result)
def fillResult(self, result):
defaults = {
'id': 0,
'type': self.provider.type,
'provider': self.provider.getName(),
'download': self.provider.download,
'url': '',
'name': '',
'age': 0,
'size': 0,
'description': '',
'score': 0
}
return mergeDicts(defaults, result)
def found(self, new_result):
if not new_result.get('provider_extra'):
new_result['provider_extra'] = ''
else:
new_result['provider_extra'] = ', %s' % new_result['provider_extra']
log.info('Found: score(%(score)s) on %(provider)s%(provider_extra)s: %(name)s', new_result)
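`ResultList.fillResult` overlays each provider hit onto a dict of default fields so scoring and logging can rely on every key existing. For these flat results, `mergeDicts` reduces to a plain `dict.update`:

```python
def fill_result(result, provider_name, provider_type):
    # Defaults mirror the ones in ResultList.fillResult; the provider's
    # own fields win over the defaults.
    defaults = {
        'id': 0, 'type': provider_type, 'provider': provider_name,
        'url': '', 'name': '', 'age': 0, 'size': 0,
        'description': '', 'score': 0,
    }
    merged = dict(defaults)
    merged.update(result)
    return merged

r = fill_result({'name': 'Movie.2012.720p', 'size': 4500}, 'BinSearch', 'nzb')
```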


@@ -1,6 +0,0 @@
from .main import IMDBAPI
def start():
return IMDBAPI()
config = []


@@ -0,0 +1,6 @@
from .main import OMDBAPI
def start():
return OMDBAPI()
config = []


@@ -10,11 +10,11 @@ import traceback
log = CPLog(__name__)
class IMDBAPI(MovieProvider):
class OMDBAPI(MovieProvider):
urls = {
'search': 'http://www.imdbapi.com/?%s',
'info': 'http://www.imdbapi.com/?i=%s',
'search': 'http://www.omdbapi.com/?%s',
'info': 'http://www.omdbapi.com/?i=%s',
}
http_time_between_calls = 0
@@ -32,7 +32,7 @@ class IMDBAPI(MovieProvider):
'name': q
}
cache_key = 'imdbapi.cache.%s' % q
cache_key = 'omdbapi.cache.%s' % q
cached = self.getCache(cache_key, self.urls['search'] % tryUrlencode({'t': name_year.get('name'), 'y': name_year.get('year', '')}), timeout = 3)
if cached:
@@ -50,7 +50,7 @@ class IMDBAPI(MovieProvider):
if not identifier:
return {}
cache_key = 'imdbapi.cache.%s' % identifier
cache_key = 'omdbapi.cache.%s' % identifier
cached = self.getCache(cache_key, self.urls['info'] % identifier, timeout = 3)
if cached:


@@ -3,6 +3,7 @@ from couchpotato.core.helpers.encoding import simplifyString, toUnicode
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.movie.base import MovieProvider
from libs.themoviedb import tmdb
import traceback
log = CPLog(__name__)
@@ -61,7 +62,12 @@ class TheMovieDb(MovieProvider):
if not results:
log.debug('Searching for movie: %s', q)
raw = tmdb.search(search_string)
raw = None
try:
raw = tmdb.search(search_string)
except:
log.error('Failed searching TMDB for "%s": %s', (search_string, traceback.format_exc()))
results = []
if raw:


@@ -0,0 +1,22 @@
from .main import BinSearch
def start():
return BinSearch()
config = [{
'name': 'binsearch',
'groups': [
{
'tab': 'searcher',
'subtab': 'nzb_providers',
'name': 'binsearch',
'description': 'Free provider, less accurate. See <a href="https://www.binsearch.info/">BinSearch</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
],
},
],
}]


@@ -0,0 +1,99 @@
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
import re
import traceback
log = CPLog(__name__)
class BinSearch(NZBProvider):
urls = {
'download': 'https://www.binsearch.info/fcgi/nzb.fcgi?q=%s',
'detail': 'https://www.binsearch.info%s',
'search': 'https://www.binsearch.info/index.php?%s',
}
http_time_between_calls = 4 # Seconds
def _search(self, movie, quality, results):
q = '%s %s' % (movie['library']['identifier'], quality.get('identifier'))
arguments = tryUrlencode({
'q': q,
'm': 'n',
'max': 250,
'adv_age': Env.setting('retention', 'nzb'),
'adv_sort': 'date',
'adv_col': 'on',
'adv_nfo': 'on',
'minsize': quality.get('size_min'),
'maxsize': quality.get('size_max'),
})
data = self.getHTMLData(self.urls['search'] % arguments)
if data:
try:
html = BeautifulSoup(data)
main_table = html.find('table', attrs = {'id':'r2'})
if not main_table:
return
items = main_table.find_all('tr')
for row in items:
title = row.find('span', attrs = {'class':'s'})
if not title: continue
nzb_id = row.find('input', attrs = {'type':'checkbox'})['name']
info = row.find('span', attrs = {'class':'d'})
size_match = re.search('size:.(?P<size>[0-9\.]+.[GMB]+)', info.text)
def extra_check(item):
parts = re.search('available:.(?P<parts>\d+)./.(?P<total>\d+)', info.text)
total = tryInt(parts.group('total'))
parts = tryInt(parts.group('parts'))
if (total / parts) < 0.95 or ((total / parts) >= 0.95 and not 'par2' in info.text.lower()):
log.info2('Wrong: \'%s\', not complete: %s out of %s', (item['name'], parts, total))
return False
if 'requires password' in info.text.lower():
log.info2('Wrong: \'%s\', passworded', (item['name']))
return False
return True
results.append({
'id': nzb_id,
'name': title.text,
'age': tryInt(re.search('(?P<size>\d+d)', row.find_all('td')[-1:][0].text).group('size')[:-1]),
'size': self.parseSize(size_match.group('size')),
'url': self.urls['download'] % nzb_id,
'detail_url': self.urls['detail'] % info.find('a')['href'],
'extra_check': extra_check
})
except:
log.error('Failed to parse HTML response from BinSearch: %s', traceback.format_exc())
def download(self, url = '', nzb_id = ''):
params = {'action': 'nzb'}
params[nzb_id] = 'on'
try:
return self.urlopen(url, params = params, show_error = False)
except:
log.error('Failed getting nzb from %s: %s', (self.getName(), traceback.format_exc()))
return 'try_next'

View File
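BinSearch reports size and completeness as free text inside the `d` span, so the provider pulls both out with regexes and rejects posts that are incomplete or passworded. Note the hunk's arithmetic (`total / parts`) looks inverted, and under Python 2 it is integer division. A sketch of the apparent intent, with the ratio written as parts-available over total (regex patterns copied from the hunk; `is_complete` is an illustrative name):

```python
import re

def is_complete(info_text):
    # 'available: 45 / 50' style counter, as emitted by BinSearch
    parts = re.search(r'available:.(?P<parts>\d+)./.(?P<total>\d+)', info_text)
    if not parts:
        return False
    have = int(parts.group('parts'))
    total = int(parts.group('total'))
    # Reject when under ~95% complete and no par2 repair set is present
    if have / total < 0.95 and 'par2' not in info_text.lower():
        return False
    if 'requires password' in info_text.lower():
        return False
    return True
```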

@@ -1,14 +1,10 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode, \
simplifyString
from couchpotato.core.helpers.variable import tryInt, getTitle
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import re
import time
import traceback
log = CPLog(__name__)
@@ -16,30 +12,26 @@ log = CPLog(__name__)
class FTDWorld(NZBProvider):
urls = {
'search': 'http://ftdworld.net/category.php?%s',
'search': 'http://ftdworld.net/api/index.php?%s',
'detail': 'http://ftdworld.net/spotinfo.php?id=%s',
'download': 'http://ftdworld.net/cgi-bin/nzbdown.pl?fileID=%s',
'login': 'http://ftdworld.net/index.php',
}
http_time_between_calls = 1 #seconds
http_time_between_calls = 3 #seconds
cat_ids = [
([4, 11], ['dvdr']),
([1], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr', 'brrip']),
([10, 13, 14], ['bd50', '720p', '1080p']),
([7, 10, 13, 14], ['bd50', '720p', '1080p']),
]
cat_backup_id = [1]
cat_backup_id = 1
def search(self, movie, quality):
def _searchOnTitle(self, title, movie, quality, results):
results = []
if self.isDisabled():
return results
q = '"%s" %s' % (title, movie['library']['year'])
q = '%s %s' % (simplifyString(getTitle(movie['library'])), movie['library']['year'])
params = {
params = tryUrlencode({
'ctitle': q,
'customQuery': 'usr',
'cage': Env.setting('retention', 'nzb'),
@@ -47,57 +39,31 @@ class FTDWorld(NZBProvider):
'csizemax': quality.get('size_max'),
'ccategory': 14,
'ctype': ','.join([str(x) for x in self.getCatId(quality['identifier'])]),
}
})
cache_key = 'ftdworld.%s.%s' % (movie['library']['identifier'], q)
data = self.getCache(cache_key, self.urls['search'] % tryUrlencode(params), opener = self.login_opener)
data = self.getJsonData(self.urls['search'] % params, opener = self.login_opener)
if data:
try:
html = BeautifulSoup(data)
main_table = html.find('table', attrs = {'id':'ftdresult'})
if data.get('numRes') == 0:
return
if not main_table:
return results
for item in data.get('data'):
items = main_table.find_all('tr', attrs = {'class': re.compile('tcontent')})
for item in items:
tds = item.find_all('td')
nzb_id = tryInt(item.attrs['data-spot'])
up = item.find('img', attrs = {'src': re.compile('up.png')})
down = item.find('img', attrs = {'src': re.compile('down.png')})
new = {
nzb_id = tryInt(item.get('id'))
results.append({
'id': nzb_id,
'type': 'nzb',
'provider': self.getName(),
'name': toUnicode(item.find('a', attrs = {'href': re.compile('./spotinfo')}).text.strip()),
'age': self.calculateAge(int(time.mktime(parse(tds[2].text).timetuple()))),
'size': 0,
'name': toUnicode(item.get('Title')),
'age': self.calculateAge(tryInt(item.get('Created'))),
'url': self.urls['download'] % nzb_id,
'download': self.loginDownload,
'detail_url': self.urls['detail'] % nzb_id,
'description': '',
'score': (tryInt(up.attrs['title'].split(' ')[0]) * 3) - (tryInt(down.attrs['title'].split(' ')[0]) * 3),
}
'score': (tryInt(item.get('webPlus', 0)) - tryInt(item.get('webMin', 0))) * 3,
})
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = False, single = True)
if is_correct_movie:
new['score'] += fireEvent('score.calculate', new, movie, single = True)
results.append(new)
self.found(new)
return results
except SyntaxError:
log.error('Failed to parse XML response from NZBClub')
return results
except:
log.error('Failed to parse HTML response from FTDWorld: %s', traceback.format_exc())
def getLoginParams(self):
return tryUrlencode({
@@ -105,3 +71,6 @@ class FTDWorld(NZBProvider):
'passlogin': self.conf('password'),
'submit': 'Log In',
})
def loginSuccess(self, output):
return 'password is incorrect' not in output

View File
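FTDWorld's temporary API returns JSON instead of HTML, so the scraping layer (BeautifulSoup, dateutil) goes away and scoring becomes simple arithmetic on a spot's vote counts. A sketch of mapping one JSON item to a result dict (field names taken from the hunk; `DOWNLOAD_URL`/`DETAIL_URL` and `item_to_result` are stand-ins, not the provider's API):

```python
DOWNLOAD_URL = 'http://ftdworld.net/cgi-bin/nzbdown.pl?fileID=%s'
DETAIL_URL = 'http://ftdworld.net/spotinfo.php?id=%s'

def try_int(value, default=0):
    # Defensive cast, in the spirit of the codebase's tryInt helper
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

def item_to_result(item):
    nzb_id = try_int(item.get('id'))
    return {
        'id': nzb_id,
        'name': item.get('Title'),
        'url': DOWNLOAD_URL % nzb_id,
        'detail_url': DETAIL_URL % nzb_id,
        # Net votes (webPlus - webMin), weighted by 3 as in the hunk
        'score': (try_int(item.get('webPlus', 0)) - try_int(item.get('webMin', 0))) * 3,
    }
```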

@@ -13,7 +13,7 @@ config = [{
'order': 10,
'description': 'Enable <a href="http://newznab.com/" target="_blank">NewzNab providers</a> such as <a href="https://nzb.su" target="_blank">NZB.su</a>, \
<a href="https://nzbs.org" target="_blank">NZBs.org</a>, <a href="http://dognzb.cr/" target="_blank">DOGnzb.cr</a>, \
<a href="https://github.com/spotweb/spotweb" target="_blank">Spotweb</a>',
<a href="https://github.com/spotweb/spotweb" target="_blank">Spotweb</a> or <a href="https://nzbgeek.info/" target="_blank">NZBGeek</a>',
'wizard': True,
'options': [
{
@@ -22,16 +22,16 @@ config = [{
},
{
'name': 'use',
'default': '0,0,0'
'default': '0,0,0,0'
},
{
'name': 'host',
'default': 'nzb.su,dognzb.cr,nzbs.org',
'default': 'nzb.su,dognzb.cr,nzbs.org,https://index.nzbgeek.info',
'description': 'The hostname of your newznab provider',
},
{
'name': 'api_key',
'default': ',,',
'default': ',,,',
'label': 'Api Key',
'description': 'Can be found on your profile page',
'type': 'combined',

View File
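Adding NZBGeek as a fourth default host means every combined option must grow a slot in lockstep: `use` gains a fourth `0` and `api_key` a fourth empty field, so the comma-separated lists stay index-aligned. A sketch of how such combined settings pair up (`combine_hosts` is a hypothetical helper, not CouchPotato's):

```python
def combine_hosts(hosts, api_keys, use):
    # Each option is a comma-separated list; positions must line up
    h, k, u = hosts.split(','), api_keys.split(','), use.split(',')
    if not (len(h) == len(k) == len(u)):
        raise ValueError('combined options out of sync')
    return [
        {'host': host, 'api_key': key, 'enabled': flag == '1'}
        for host, key, flag in zip(h, k, u)
    ]
```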

@@ -1,8 +1,8 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import cleanHost, splitString
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.base import ResultList
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
@@ -10,7 +10,6 @@ from urllib2 import HTTPError
from urlparse import urlparse
import time
import traceback
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -25,140 +24,59 @@ class Newznab(NZBProvider, RSS):
limits_reached = {}
cat_ids = [
([2010], ['dvdr']),
([2030], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr']),
([2040], ['720p', '1080p']),
([2050], ['bd50']),
]
cat_backup_id = 2000
http_time_between_calls = 1 # Seconds
def feed(self):
hosts = self.getHosts()
results = []
for host in hosts:
result = self.singleFeed(host)
if result:
results.extend(result)
return results
def singleFeed(self, host):
results = []
if self.isDisabled(host):
return results
arguments = tryUrlencode({
't': self.cat_backup_id,
'r': host['api_key'],
'i': 58,
})
url = "%s?%s" % (cleanHost(host['host']) + 'rss', arguments)
cache_key = 'newznab.%s.feed.%s' % (host['host'], arguments)
results = self.createItems(url, cache_key, host, for_feed = True)
return results
def search(self, movie, quality):
hosts = self.getHosts()
results = []
for host in hosts:
result = self.singleSearch(host, movie, quality)
results = ResultList(self, movie, quality, imdb_results = True)
if result:
results.extend(result)
for host in hosts:
if self.isDisabled(host):
continue
self._searchOnHost(host, movie, quality, results)
return results
def singleSearch(self, host, movie, quality):
def _searchOnHost(self, host, movie, quality, results):
results = []
if self.isDisabled(host):
return results
cat_id = self.getCatId(quality['identifier'])
arguments = tryUrlencode({
'imdbid': movie['library']['identifier'].replace('tt', ''),
'cat': cat_id[0],
'apikey': host['api_key'],
'extended': 1
})
url = "%s&%s" % (self.getUrl(host['host'], self.urls['search']), arguments)
url = '%s&%s' % (self.getUrl(host['host'], self.urls['search']), arguments)
cache_key = 'newznab.%s.%s.%s' % (host['host'], movie['library']['identifier'], cat_id[0])
nzbs = self.getRSSData(url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
results = self.createItems(url, cache_key, host, movie = movie, quality = quality)
for nzb in nzbs:
return results
date = None
for item in nzb:
if item.attrib.get('name') == 'usenetdate':
date = item.attrib.get('value')
break
def createItems(self, url, cache_key, host, movie = None, quality = None, for_feed = False):
results = []
if not date:
date = self.getTextElement(nzb, 'pubDate')
data = self.getCache(cache_key, url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
if data:
try:
try:
data = XMLTree.fromstring(data)
nzbs = self.getElements(data, 'channel/item')
except Exception, e:
log.debug('%s, %s', (self.getName(), e))
return results
nzb_id = self.getTextElement(nzb, 'guid').split('/')[-1:].pop()
name = self.getTextElement(nzb, 'title')
results = []
for nzb in nzbs:
if not name:
continue
date = ''
size = 0
for item in nzb:
if item.attrib.get('name') == 'size':
size = item.attrib.get('value')
elif item.attrib.get('name') == 'usenetdate':
date = item.attrib.get('value')
if date is '': log.debug('Date not parsed properly or not available for %s: %s', (host['host'], self.getTextElement(nzb, "title")))
if size is 0: log.debug('Size not parsed properly or not available for %s: %s', (host['host'], self.getTextElement(nzb, "title")))
id = self.getTextElement(nzb, "guid").split('/')[-1:].pop()
new = {
'id': id,
'provider': self.getName(),
'provider_extra': host['host'],
'type': 'nzb',
'name': self.getTextElement(nzb, "title"),
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': int(size) / 1024 / 1024,
'url': (self.getUrl(host['host'], self.urls['download']) % id) + self.getApiExt(host),
'download': self.download,
'detail_url': '%sdetails/%s' % (cleanHost(host['host']), id),
'content': self.getTextElement(nzb, "description"),
}
if not for_feed:
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = True, single = True)
if is_correct_movie:
new['score'] = fireEvent('score.calculate', new, movie, single = True)
results.append(new)
self.found(new)
else:
results.append(new)
return results
except SyntaxError:
log.error('Failed to parse XML response from Newznab: %s', host)
return results
results.append({
'id': nzb_id,
'provider_extra': host['host'],
'name': self.getTextElement(nzb, 'title'),
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': int(self.getElement(nzb, 'enclosure').attrib['length']) / 1024 / 1024,
'url': (self.getUrl(host['host'], self.urls['download']) % tryUrlencode(nzb_id)) + self.getApiExt(host),
'detail_url': '%sdetails/%s' % (cleanHost(host['host']), tryUrlencode(nzb_id)),
'content': self.getTextElement(nzb, 'description'),
})
def getHosts(self):
@@ -218,6 +136,6 @@ class Newznab(NZBProvider, RSS):
self.limits_reached[host] = time.time()
return 'try_next'
log.error('Failed download from %s', (host, traceback.format_exc()))
log.error('Failed download from %s: %s', (host, traceback.format_exc()))
return 'try_next'

View File
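The Newznab refactor drops the per-provider `createItems` loop in favor of a shared `ResultList` that scores and filters as items are appended; what remains here is field extraction: the id comes from the tail of the guid URL, and the size from the RSS enclosure's byte `length`. A sketch of those two conversions (plain functions, not the provider's methods):

```python
def guid_to_id(guid):
    # e.g. 'https://host/details/abcdef123456' -> 'abcdef123456'
    return guid.split('/')[-1]

def enclosure_length_to_mb(length):
    # Newznab enclosures report bytes; results store whole megabytes
    return int(length) // (1024 * 1024)
```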

@@ -1,15 +1,11 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode, \
simplifyString
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt, getTitle
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import time
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -22,80 +18,48 @@ class NZBClub(NZBProvider, RSS):
http_time_between_calls = 4 #seconds
def search(self, movie, quality):
def _searchOnTitle(self, title, movie, quality, results):
results = []
if self.isDisabled():
return results
q = '"%s %s" %s' % (title, movie['library']['year'], quality.get('identifier'))
q = '"%s %s" %s' % (simplifyString(getTitle(movie['library'])), movie['library']['year'], quality.get('identifier'))
params = {
params = tryUrlencode({
'q': q,
'ig': '1',
'rpp': 200,
'st': 1,
'sp': 1,
'ns': 1,
}
})
cache_key = 'nzbclub.%s.%s.%s' % (movie['library']['identifier'], quality.get('identifier'), q)
data = self.getCache(cache_key, self.urls['search'] % tryUrlencode(params))
if data:
try:
try:
data = XMLTree.fromstring(data)
nzbs = self.getElements(data, 'channel/item')
except Exception, e:
log.debug('%s, %s', (self.getName(), e))
return results
nzbs = self.getRSSData(self.urls['search'] % params)
for nzb in nzbs:
for nzb in nzbs:
nzbclub_id = tryInt(self.getTextElement(nzb, "link").split('/nzb_view/')[1].split('/')[0])
enclosure = self.getElement(nzb, "enclosure").attrib
size = enclosure['length']
date = self.getTextElement(nzb, "pubDate")
nzbclub_id = tryInt(self.getTextElement(nzb, "link").split('/nzb_view/')[1].split('/')[0])
enclosure = self.getElement(nzb, "enclosure").attrib
size = enclosure['length']
date = self.getTextElement(nzb, "pubDate")
def extra_check(item):
full_description = self.getCache('nzbclub.%s' % nzbclub_id, item['detail_url'], cache_timeout = 25920000)
def extra_check(item):
full_description = self.getCache('nzbclub.%s' % nzbclub_id, item['detail_url'], cache_timeout = 25920000)
for ignored in ['ARCHIVE inside ARCHIVE', 'Incomplete', 'repair impossible']:
if ignored in full_description:
log.info('Wrong: Seems to be passworded or corrupted files: %s', new['name'])
return False
for ignored in ['ARCHIVE inside ARCHIVE', 'Incomplete', 'repair impossible']:
if ignored in full_description:
log.info('Wrong: Seems to be passworded or corrupted files: %s', item['name'])
return False
return True
return True
new = {
'id': nzbclub_id,
'type': 'nzb',
'provider': self.getName(),
'name': toUnicode(self.getTextElement(nzb, "title")),
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': tryInt(size) / 1024 / 1024,
'url': enclosure['url'].replace(' ', '_'),
'download': self.download,
'detail_url': self.getTextElement(nzb, "link"),
'description': '',
'get_more_info': self.getMoreInfo,
'extra_check': extra_check
}
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = False, single = True)
if is_correct_movie:
new['score'] = fireEvent('score.calculate', new, movie, single = True)
results.append(new)
self.found(new)
return results
except SyntaxError:
log.error('Failed to parse XML response from NZBClub')
return results
results.append({
'id': nzbclub_id,
'name': toUnicode(self.getTextElement(nzb, "title")),
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': tryInt(size) / 1024 / 1024,
'url': enclosure['url'].replace(' ', '_'),
'detail_url': self.getTextElement(nzb, "link"),
'get_more_info': self.getMoreInfo,
'extra_check': extra_check
})
def getMoreInfo(self, item):
full_description = self.getCache('nzbclub.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)

View File
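Each NZBClub result carries an `extra_check` callback that fetches the long description lazily and rejects posts flagged as incomplete or passworded. A minimal sketch of the closure pattern (the `fetch` parameter stands in for the provider's cached HTTP call):

```python
BAD_MARKERS = ['ARCHIVE inside ARCHIVE', 'Incomplete', 'repair impossible']

def make_extra_check(detail_url, fetch):
    # The closure defers the (expensive) detail fetch until a
    # candidate has already passed the cheaper filters
    def extra_check(item):
        full_description = fetch(detail_url)
        return not any(marker in full_description for marker in BAD_MARKERS)
    return extra_check
```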

@@ -1,17 +1,13 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode, \
simplifyString
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt, getTitle
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import re
import time
import traceback
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -20,18 +16,14 @@ class NzbIndex(NZBProvider, RSS):
urls = {
'download': 'https://www.nzbindex.com/download/',
'api': 'https://www.nzbindex.com/rss/',
'search': 'https://www.nzbindex.com/rss/?%s',
}
http_time_between_calls = 1 # Seconds
def search(self, movie, quality):
def _searchOnTitle(self, title, movie, quality, results):
results = []
if self.isDisabled():
return results
q = '"%s %s" %s' % (simplifyString(getTitle(movie['library'])), movie['library']['year'], quality.get('identifier'))
q = '"%s" %s %s' % (title, movie['library']['year'], quality.get('identifier'))
arguments = tryUrlencode({
'q': q,
'age': Env.setting('retention', 'nzb'),
@@ -43,68 +35,37 @@ class NzbIndex(NZBProvider, RSS):
'more': 1,
'complete': 1,
})
url = "%s?%s" % (self.urls['api'], arguments)
cache_key = 'nzbindex.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
nzbs = self.getRSSData(self.urls['search'] % arguments)
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
nzbindex_id = int(self.getTextElement(nzb, "link").split('/')[4])
data = self.getCache(cache_key, url)
if data:
try:
try:
data = XMLTree.fromstring(data)
nzbs = self.getElements(data, 'channel/item')
except Exception, e:
log.debug('%s, %s', (self.getName(), e))
return results
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
nzbindex_id = int(self.getTextElement(nzb, "link").split('/')[4])
try:
description = self.getTextElement(nzb, "description")
except:
description = ''
def extra_check(new):
if '#c20000' in new['description'].lower():
log.info('Wrong: Seems to be passworded: %s', new['name'])
return False
return True
new = {
'id': nzbindex_id,
'type': 'nzb',
'provider': self.getName(),
'download': self.download,
'name': self.getTextElement(nzb, "title"),
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, "pubDate")).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': enclosure['url'].replace('/download/', '/release/'),
'description': description,
'get_more_info': self.getMoreInfo,
'extra_check': extra_check,
'check_nzb': True,
}
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = False, single = True)
if is_correct_movie:
new['score'] = fireEvent('score.calculate', new, movie, single = True)
results.append(new)
self.found(new)
return results
description = self.getTextElement(nzb, "description")
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
description = ''
return results
def extra_check(item):
if '#c20000' in item['description'].lower():
log.info('Wrong: Seems to be passworded: %s', item['name'])
return False
return True
results.append({
'id': nzbindex_id,
'name': self.getTextElement(nzb, "title"),
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, "pubDate")).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': enclosure['url'].replace('/download/', '/release/'),
'description': description,
'get_more_info': self.getMoreInfo,
'extra_check': extra_check,
})
def getMoreInfo(self, item):
try:
@@ -116,5 +77,3 @@ class NzbIndex(NZBProvider, RSS):
except:
pass
def isEnabled(self):
return NZBProvider.isEnabled(self) and self.conf('enabled')

View File

@@ -1,11 +1,9 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
import time
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -23,15 +21,9 @@ class Nzbsrus(NZBProvider, RSS):
]
cat_backup_id = 240
def search(self, movie, quality):
results = []
if self.isDisabled():
return results
def _search(self, movie, quality, results):
cat_id_string = '&'.join(['c%s=1' % x for x in self.getCatId(quality.get('identifier'))])
arguments = tryUrlencode({
'searchtext': 'imdb:' + movie['library']['identifier'][2:],
'uid': self.conf('userid'),
@@ -42,60 +34,29 @@ class Nzbsrus(NZBProvider, RSS):
# check for english_only
if self.conf('english_only'):
arguments += "&lang0=1&lang3=1&lang1=1"
arguments += '&lang0=1&lang3=1&lang1=1'
url = "%s&%s&%s" % (self.urls['search'], arguments , cat_id_string)
url = '%s&%s&%s' % (self.urls['search'], arguments, cat_id_string)
nzbs = self.getRSSData(url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
cache_key = 'nzbsrus_1.%s.%s' % (movie['library'].get('identifier'), cat_id_string)
single_cat = True
for nzb in nzbs:
data = self.getCache(cache_key, url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
if data:
try:
try:
data = XMLTree.fromstring(data)
nzbs = self.getElements(data, 'results/result')
except Exception, e:
log.debug('%s, %s', (self.getName(), e))
return results
title = self.getTextElement(nzb, 'name')
if 'error' in title.lower(): continue
for nzb in nzbs:
nzb_id = self.getTextElement(nzb, 'id')
size = int(round(int(self.getTextElement(nzb, 'size')) / 1048576))
age = int(round((time.time() - int(self.getTextElement(nzb, 'postdate'))) / 86400))
title = self.getTextElement(nzb, "name")
if 'error' in title.lower(): continue
id = self.getTextElement(nzb, "id")
size = int(round(int(self.getTextElement(nzb, "size")) / 1048576))
age = int(round((time.time() - int(self.getTextElement(nzb, "postdate"))) / 86400))
new = {
'id': id,
'type': 'nzb',
'provider': self.getName(),
'name': title,
'age': age,
'size': size,
'url': self.urls['download'] % id + self.getApiExt() + self.getTextElement(nzb, "key"),
'download': self.download,
'detail_url': self.urls['detail'] % id,
'description': self.getTextElement(nzb, "addtext"),
'check_nzb': True,
}
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = True, single = True)
if is_correct_movie:
new['score'] = fireEvent('score.calculate', new, movie, single = True)
results.append(new)
self.found(new)
return results
except SyntaxError:
log.error('Failed to parse XML response from Nzbsrus.com')
return results
results.append({
'id': nzb_id,
'name': title,
'age': age,
'size': size,
'url': self.urls['download'] % nzb_id + self.getApiExt() + self.getTextElement(nzb, 'key'),
'detail_url': self.urls['detail'] % nzb_id,
'description': self.getTextElement(nzb, 'addtext'),
})
def getApiExt(self):
return '/%s/' % (self.conf('userid'))

View File
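Nzbsrus reports a raw byte count and a Unix post date, which the provider rounds into whole megabytes and an age in days. A sketch of those two conversions (free functions; `now` is injectable so the age math is testable):

```python
import time

def size_mb(size_bytes):
    # 1 MiB = 1048576 bytes, rounded to the nearest whole MB
    return int(round(int(size_bytes) / 1048576))

def age_days(postdate, now=None):
    # Seconds since posting, divided into whole days
    now = time.time() if now is None else now
    return int(round((now - int(postdate)) / 86400))
```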

@@ -0,0 +1,23 @@
from .main import Nzbx
def start():
return Nzbx()
config = [{
'name': 'nzbx',
'groups': [
{
'tab': 'searcher',
'subtab': 'nzb_providers',
'name': 'nzbX',
'description': 'Free provider. See <a href="https://www.nzbx.co/">nzbX</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
],
},
],
}]

View File

@@ -0,0 +1,38 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
log = CPLog(__name__)
class Nzbx(NZBProvider):
urls = {
'search': 'https://nzbx.co/api/search?%s',
'details': 'https://nzbx.co/api/details?guid=%s',
}
http_time_between_calls = 1 # Seconds
def _search(self, movie, quality, results):
# Get nzbs
arguments = tryUrlencode({
'q': movie['library']['identifier'].replace('tt', ''),
'sf': quality.get('size_min'),
})
nzbs = self.getJsonData(self.urls['search'] % arguments, headers = {'User-Agent': Env.getIdentifier()})
for nzb in nzbs:
results.append({
'id': nzb['guid'],
'url': nzb['nzb'],
'detail_url': self.urls['details'] % nzb['guid'],
'name': nzb['name'],
'age': self.calculateAge(int(nzb['postdate'])),
'size': tryInt(nzb['size']) / 1024 / 1024,
'score': 5 if nzb['votes']['upvotes'] > nzb['votes']['downvotes'] else 0
})

View File
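nzbX returns JSON directly, so each result is a dict lookup plus a small vote heuristic: any net-positive post gets a flat score bonus of 5. A sketch of the mapping (field names from the hunk; `nzb_to_result` is an illustrative name, and the size division is written as floor division since the original ran under Python 2):

```python
DETAILS_URL = 'https://nzbx.co/api/details?guid=%s'

def nzb_to_result(nzb):
    return {
        'id': nzb['guid'],
        'url': nzb['nzb'],
        'detail_url': DETAILS_URL % nzb['guid'],
        'name': nzb['name'],
        # Bytes to whole megabytes
        'size': int(nzb['size']) // (1024 * 1024),
        # Flat +5 bonus for net-positive community votes, else 0
        'score': 5 if nzb['votes']['upvotes'] > nzb['votes']['downvotes'] else 0,
    }
```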

@@ -0,0 +1,31 @@
from .main import OMGWTFNZBs
def start():
return OMGWTFNZBs()
config = [{
'name': 'omgwtfnzbs',
'groups': [
{
'tab': 'searcher',
'subtab': 'nzb_providers',
'name': 'OMGWTFNZBs',
'description': 'See <a href="http://www.omgwtfnzbs.com/">OMGWTFNZBs</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
{
'name': 'username',
'default': '',
},
{
'name': 'api_key',
'label': 'Api Key',
'default': '',
},
],
},
],
}]

View File

@@ -0,0 +1,61 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from dateutil.parser import parse
from urlparse import urlparse, parse_qs
import time
log = CPLog(__name__)
class OMGWTFNZBs(NZBProvider, RSS):
urls = {
'search': 'http://rss.omgwtfnzbs.org/rss-search.php?%s',
}
http_time_between_calls = 1 #seconds
cat_ids = [
([15], ['dvdrip']),
([15, 16], ['brrip']),
([16], ['720p', '1080p', 'bd50']),
([17], ['dvdr']),
]
cat_backup_id = 'movie'
def search(self, movie, quality):
if quality['identifier'] in fireEvent('quality.pre_releases', single = True):
return []
return super(OMGWTFNZBs, self).search(movie, quality)
def _searchOnTitle(self, title, movie, quality, results):
q = '%s %s' % (title, movie['library']['year'])
params = tryUrlencode({
'search': q,
'catid': ','.join([str(x) for x in self.getCatId(quality['identifier'])]),
'user': self.conf('username', default = ''),
'api': self.conf('api_key', default = ''),
})
nzbs = self.getRSSData(self.urls['search'] % params)
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
results.append({
'id': parse_qs(urlparse(self.getTextElement(nzb, 'link')).query).get('id')[0],
'name': toUnicode(self.getTextElement(nzb, 'title')),
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, 'pubDate')).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': self.getTextElement(nzb, 'link'),
'description': self.getTextElement(nzb, 'description')
})

View File
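OMGWTFNZBs puts the release id in the query string of each item's link, so the provider parses it back out with `urlparse` plus `parse_qs`. A sketch of that extraction (standalone function; the example URL is hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def id_from_link(link):
    # e.g. 'http://host/details.php?id=12345' -> '12345'
    qs = parse_qs(urlparse(link).query)
    return qs.get('id', [None])[0]
```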

@@ -24,3 +24,9 @@ class TorrentProvider(YarrProvider):
return getImdb(data) == imdbId
return False
class TorrentMagnetProvider(TorrentProvider):
type = 'torrent_magnet'
download = None

View File

@@ -1,16 +1,14 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import simplifyString
from couchpotato.core.helpers.variable import tryInt, getTitle
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
from couchpotato.core.providers.torrent.base import TorrentMagnetProvider
import re
import traceback
log = CPLog(__name__)
class KickAssTorrents(TorrentProvider):
class KickAssTorrents(TorrentMagnetProvider):
urls = {
'test': 'https://kat.ph/',
@@ -30,16 +28,10 @@ class KickAssTorrents(TorrentProvider):
http_time_between_calls = 1 #seconds
cat_backup_id = None
def search(self, movie, quality):
def _search(self, movie, quality, results):
results = []
if self.isDisabled():
return results
data = self.getHTMLData(self.urls['search'] % ('m', movie['library']['identifier'].replace('tt', '')))
title = simplifyString(getTitle(movie['library'])).replace(' ', '-')
cache_key = 'kickasstorrents.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
data = self.getCache(cache_key, self.urls['search'] % (title, movie['library']['identifier'].replace('tt', '')))
if data:
cat_ids = self.getCatId(quality['identifier'])
@@ -53,62 +45,42 @@ class KickAssTorrents(TorrentProvider):
continue
try:
for temp in result.find_all('tr'):
if temp['class'] is 'firstr' or not temp.get('id'):
continue
try:
for temp in result.find_all('tr'):
if temp['class'] is 'firstr' or not temp.get('id'):
continue
new = {}
new = {
'type': 'torrent_magnet',
'check_nzb': False,
'description': '',
'provider': self.getName(),
'score': 0,
}
nr = 0
for td in temp.find_all('td'):
column_name = table_order[nr]
if column_name:
nr = 0
for td in temp.find_all('td'):
column_name = table_order[nr]
if column_name:
if column_name is 'name':
link = td.find('div', {'class': 'torrentname'}).find_all('a')[1]
new['id'] = temp.get('id')[-8:]
new['name'] = link.text
new['url'] = td.find('a', 'imagnet')['href']
new['detail_url'] = self.urls['detail'] % link['href'][1:]
new['score'] = 20 if td.find('a', 'iverif') else 0
elif column_name is 'size':
new['size'] = self.parseSize(td.text)
elif column_name is 'age':
new['age'] = self.ageToDays(td.text)
elif column_name is 'seeds':
new['seeders'] = tryInt(td.text)
elif column_name is 'leechers':
new['leechers'] = tryInt(td.text)
if column_name is 'name':
link = td.find('div', {'class': 'torrentname'}).find_all('a')[1]
new['id'] = temp.get('id')[-8:]
new['name'] = link.text
new['url'] = td.find('a', 'imagnet')['href']
new['detail_url'] = self.urls['detail'] % link['href'][1:]
new['score'] = 20 if td.find('a', 'iverif') else 0
elif column_name is 'size':
new['size'] = self.parseSize(td.text)
elif column_name is 'age':
new['age'] = self.ageToDays(td.text)
elif column_name is 'seeds':
new['seeds'] = tryInt(td.text)
elif column_name is 'leechers':
new['leechers'] = tryInt(td.text)
nr += 1
nr += 1
new['score'] += fireEvent('score.calculate', new, movie, single = True)
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = True, single = True)
if is_correct_movie:
results.append(new)
self.found(new)
except:
log.error('Failed parsing KickAssTorrents: %s', traceback.format_exc())
results.append(new)
except:
pass
log.error('Failed parsing KickAssTorrents: %s', traceback.format_exc())
return results
except AttributeError:
log.debug('No search results found.')
return results
def ageToDays(self, age_str):
age = 0
age_str = age_str.replace('&nbsp;', ' ')

View File
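One caveat in the KickAssTorrents hunk: both the old and new code compare strings with `is` (`temp['class'] is 'firstr'`, `column_name is 'name'`), which tests object identity rather than equality and only works by accident of CPython's literal interning. A sketch of the column dispatch using the safe `==` comparison (the `TABLE_ORDER` list and plain-text cells are illustrative; the real code reads `<td>` elements with BeautifulSoup):

```python
TABLE_ORDER = ['name', 'size', None, 'age', 'seeds', 'leechers']

def parse_row(cells):
    # cells: one text value per <td>, in TABLE_ORDER order
    new = {}
    for column_name, text in zip(TABLE_ORDER, cells):
        if column_name == 'name':        # '==' compares content; 'is' does not
            new['name'] = text
        elif column_name == 'size':
            new['size'] = text
        elif column_name == 'age':
            new['age'] = text
        elif column_name == 'seeds':
            new['seeds'] = int(text)
        elif column_name == 'leechers':
            new['leechers'] = int(text)
    return new
```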

@@ -1,4 +1,3 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import getTitle, tryInt, mergeDicts
from couchpotato.core.logger import CPLog
@@ -65,18 +64,11 @@ class PassThePopcorn(TorrentProvider):
else:
raise PassThePopcorn.NotLoggedInHTTPError(req.get_full_url(), code, msg, headers, fp)
def search(self, movie, quality):
results = []
if self.isDisabled():
return results
def _search(self, movie, quality, results):
movie_title = getTitle(movie['library'])
quality_id = quality['identifier']
log.info('Searching for %s at quality %s' % (movie_title, quality_id))
params = mergeDicts(self.quality_search_params[quality_id].copy(), {
'order_by': 'relevance',
'order_way': 'descending',
@@ -85,7 +77,7 @@ class PassThePopcorn(TorrentProvider):
# Do login for the cookies
if not self.login_opener and not self.login():
return results
return
try:
url = '%s?json=noredirect&%s' % (self.urls['torrent'], tryUrlencode(params))
@@ -93,12 +85,11 @@ class PassThePopcorn(TorrentProvider):
res = json.loads(txt)
except:
log.error('Search on PassThePopcorn.me (%s) failed (could not decode JSON)' % params)
return []
return
try:
if not 'Movies' in res:
log.info("PTP search returned nothing for '%s' at quality '%s' with search parameters %s" % (movie_title, quality_id, params))
return []
return
authkey = res['AuthKey']
passkey = res['PassKey']
@@ -118,7 +109,6 @@ class PassThePopcorn(TorrentProvider):
if 'Scene' in torrent and torrent['Scene']:
torrentdesc += ' Scene'
if 'RemasterTitle' in torrent and torrent['RemasterTitle']:
# eliminate odd characters...
torrentdesc += self.htmlToASCII(' %s' % torrent['RemasterTitle'])
torrentdesc += ' (%s)' % quality_id
@@ -127,39 +117,23 @@ class PassThePopcorn(TorrentProvider):
def extra_check(item):
return self.torrentMeetsQualitySpec(item, type)
def extra_score(item):
return 50 if torrent['GoldenPopcorn'] else 0
new = {
results.append({
'id': torrent_id,
'type': 'torrent',
'provider': self.getName(),
'name': torrent_name,
'description': '',
'url': '%s?action=download&id=%d&authkey=%s&torrent_pass=%s' % (self.urls['torrent'], torrent_id, authkey, passkey),
'detail_url': self.urls['detail'] % torrent_id,
'date': tryInt(time.mktime(parse(torrent['UploadTime']).timetuple())),
'size': tryInt(torrent['Size']) / (1024 * 1024),
'provider': self.getName(),
'seeders': tryInt(torrent['Seeders']),
'leechers': tryInt(torrent['Leechers']),
'extra_score': extra_score,
'score': 50 if torrent['GoldenPopcorn'] else 0,
'extra_check': extra_check,
'download': self.loginDownload,
}
})
new['score'] = fireEvent('score.calculate', new, movie, single = True)
if fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality):
results.append(new)
self.found(new)
return results
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def login(self):
cookieprocessor = urllib2.HTTPCookieProcessor(cookielib.CookieJar())

@@ -10,7 +10,7 @@ config = [{
'tab': 'searcher',
'subtab': 'torrent_providers',
'name': 'PublicHD',
'description': 'Public Torrent site with only HD content. See <a href="https://publichd.eu/">PublicHD</a>',
'description': 'Public Torrent site with only HD content. See <a href="https://publichd.se/">PublicHD</a>',
'options': [
{
'name': 'enabled',

@@ -1,9 +1,8 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode, toUnicode
from couchpotato.core.helpers.variable import getTitle, tryInt
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
from couchpotato.core.providers.torrent.base import TorrentMagnetProvider
from urlparse import parse_qs
import re
import traceback
@@ -11,31 +10,31 @@ import traceback
log = CPLog(__name__)
class PublicHD(TorrentProvider):
class PublicHD(TorrentMagnetProvider):
urls = {
'test': 'https://publichd.eu',
'detail': 'https://publichd.eu/index.php?page=torrent-details&id=%s',
'search': 'https://publichd.eu/index.php',
'test': 'https://publichd.se',
'detail': 'https://publichd.se/index.php?page=torrent-details&id=%s',
'search': 'https://publichd.se/index.php',
}
http_time_between_calls = 0
def search(self, movie, quality):
results = []
if not quality.get('hd', False):
return []
if self.isDisabled() or not quality.get('hd', False):
return results
return super(PublicHD, self).search(movie, quality)
def _searchOnTitle(self, title, movie, quality, results):
params = tryUrlencode({
'page':'torrents',
'search': '%s %s' % (getTitle(movie['library']), movie['library']['year']),
'search': '%s %s' % (title, movie['library']['year']),
'active': 1,
})
url = '%s?%s' % (self.urls['search'], params)
cache_key = 'publichd.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
data = self.getCache(cache_key, url)
data = self.getHTMLData('%s?%s' % (self.urls['search'], params))
if data:
@@ -53,36 +52,20 @@ class PublicHD(TorrentProvider):
url = parse_qs(info_url['href'])
new = {
results.append({
'id': url['id'][0],
'name': info_url.string,
'type': 'torrent_magnet',
'check_nzb': False,
'description': '',
'provider': self.getName(),
'url': download['href'],
'detail_url': self.urls['detail'] % url['id'][0],
'size': self.parseSize(result.find_all('td')[7].string),
'seeders': tryInt(result.find_all('td')[4].string),
'leechers': tryInt(result.find_all('td')[5].string),
'get_more_info': self.getMoreInfo
}
new['score'] = fireEvent('score.calculate', new, movie, single = True)
is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
imdb_results = False, single = True)
if is_correct_movie:
results.append(new)
self.found(new)
return results
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def getMoreInfo(self, item):
full_description = self.getCache('publichd.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
html = BeautifulSoup(full_description)

@@ -1,5 +1,4 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode, toUnicode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
@@ -27,30 +26,24 @@ class SceneAccess(TorrentProvider):
http_time_between_calls = 1 #seconds
def search(self, movie, quality):
results = []
if self.isDisabled():
return results
def _search(self, movie, quality, results):
url = self.urls['search'] % (
self.getCatId(quality['identifier'])[0],
self.getCatId(quality['identifier'])[0]
)
q = '%s %s' % (movie['library']['identifier'], quality.get('identifier'))
arguments = tryUrlencode({
'search': q,
'search': movie['library']['identifier'],
'method': 1,
})
url = "%s&%s" % (url, arguments)
# Do login for the cookies
if not self.login_opener and not self.login():
return results
return
cache_key = 'sceneaccess.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
data = self.getCache(cache_key, url, opener = self.login_opener)
data = self.getHTMLData(url, opener = self.login_opener)
if data:
html = BeautifulSoup(data)
@@ -58,7 +51,7 @@ class SceneAccess(TorrentProvider):
try:
resultsTable = html.find('table', attrs = {'id' : 'torrents-table'})
if resultsTable is None:
return results
return
entries = resultsTable.find_all('tr', attrs = {'class' : 'tt_row'})
for result in entries:
@@ -66,38 +59,23 @@ class SceneAccess(TorrentProvider):
link = result.find('td', attrs = {'class' : 'ttr_name'}).find('a')
url = result.find('td', attrs = {'class' : 'td_dl'}).find('a')
leechers = result.find('td', attrs = {'class' : 'ttr_leechers'}).find('a')
id = link['href'].replace('details?id=', '')
torrent_id = link['href'].replace('details?id=', '')
new = {
'id': id,
'type': 'torrent',
'check_nzb': False,
'description': '',
'provider': self.getName(),
results.append({
'id': torrent_id,
'name': link['title'],
'url': self.urls['download'] % url['href'],
'detail_url': self.urls['detail'] % id,
'detail_url': self.urls['detail'] % torrent_id,
'size': self.parseSize(result.find('td', attrs = {'class' : 'ttr_size'}).contents[0]),
'seeders': tryInt(result.find('td', attrs = {'class' : 'ttr_seeders'}).find('a').string),
'leechers': tryInt(leechers.string) if leechers else 0,
'download': self.loginDownload,
'get_more_info': self.getMoreInfo,
}
})
new['score'] = fireEvent('score.calculate', new, movie, single = True)
is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
imdb_results = False, single = True)
if is_correct_movie:
results.append(new)
self.found(new)
return results
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def getLoginParams(self):
return tryUrlencode({
'username': self.conf('username'),

@@ -1,7 +1,6 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import simplifyString, tryUrlencode
from couchpotato.core.helpers.variable import getTitle, tryInt
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
import traceback
@@ -21,13 +20,9 @@ class SceneHD(TorrentProvider):
http_time_between_calls = 1 #seconds
def search(self, movie, quality):
def _searchOnTitle(self, title, movie, quality, results):
results = []
if self.isDisabled():
return results
q = '"%s %s" %s' % (simplifyString(getTitle(movie['library'])), movie['library']['year'], quality.get('identifier'))
q = '"%s %s" %s' % (simplifyString(title), movie['library']['year'], quality.get('identifier'))
arguments = tryUrlencode({
'search': q,
})
@@ -35,10 +30,9 @@ class SceneHD(TorrentProvider):
# Cookie login
if not self.login_opener and not self.login():
return results
return
cache_key = 'scenehd.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
data = self.getCache(cache_key, url, opener = self.login_opener)
data = self.getHTMLData(url, opener = self.login_opener)
if data:
html = BeautifulSoup(data)
@@ -52,7 +46,7 @@ class SceneHD(TorrentProvider):
detail_link = all_cells[2].find('a')
details = detail_link['href']
id = details.replace('details.php?id=', '')
torrent_id = details.replace('details.php?id=', '')
leechers = all_cells[11].find('a')
if leechers:
@@ -60,38 +54,20 @@ class SceneHD(TorrentProvider):
else:
leechers = all_cells[11].string
new = {
'id': id,
results.append({
'id': torrent_id,
'name': detail_link['title'],
'type': 'torrent',
'check_nzb': False,
'description': '',
'provider': self.getName(),
'size': self.parseSize(all_cells[7].string),
'seeders': tryInt(all_cells[10].find('a').string),
'leechers': tryInt(leechers),
'url': self.urls['download'] % id,
'url': self.urls['download'] % torrent_id,
'download': self.loginDownload,
}
imdb_link = all_cells[1].find('a')
imdb_results = self.imdbMatch(imdb_link['href'], movie['library']['identifier']) if imdb_link else False
new['score'] = fireEvent('score.calculate', new, movie, single = True)
is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
imdb_results = imdb_results, single = True)
if is_correct_movie:
results.append(new)
self.found(new)
return results
'description': all_cells[1].find('a')['href'],
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def getLoginParams(self, params):
return tryUrlencode({

@@ -1,11 +1,9 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import getTitle, tryInt, cleanHost
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.variable import tryInt, cleanHost
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
from couchpotato.core.providers.torrent.base import TorrentMagnetProvider
from couchpotato.environment import Env
from urllib import quote_plus
import re
import time
import traceback
@@ -13,7 +11,7 @@ import traceback
log = CPLog(__name__)
class ThePirateBay(TorrentProvider):
class ThePirateBay(TorrentMagnetProvider):
urls = {
'detail': '%s/torrent/%s',
@@ -45,6 +43,58 @@ class ThePirateBay(TorrentProvider):
self.domain = self.conf('domain')
super(ThePirateBay, self).__init__()
def _searchOnTitle(self, title, movie, quality, results):
search_url = self.urls['search'] % (self.getDomain(), tryUrlencode(title + ' ' + quality['identifier']), self.getCatId(quality['identifier'])[0])
data = self.getHTMLData(search_url)
if data:
try:
soup = BeautifulSoup(data)
results_table = soup.find('table', attrs = {'id': 'searchResult'})
if not results_table:
return
entries = results_table.find_all('tr')
for result in entries[2:]:
link = result.find(href = re.compile('torrent\/\d+\/'))
download = result.find(href = re.compile('magnet:'))
try:
size = re.search('Size (?P<size>.+),', unicode(result.select('font.detDesc')[0])).group('size')
except:
continue
if link and download:
def extra_score(item):
trusted = (0, 10)[result.find('img', alt = re.compile('Trusted')) != None]
vip = (0, 20)[result.find('img', alt = re.compile('VIP')) != None]
confirmed = (0, 30)[result.find('img', alt = re.compile('Helpers')) != None]
moderated = (0, 50)[result.find('img', alt = re.compile('Moderator')) != None]
return confirmed + trusted + vip + moderated
results.append({
'id': re.search('/(?P<id>\d+)/', link['href']).group('id'),
'name': link.string,
'url': download['href'],
'detail_url': self.getDomain(link['href']),
'size': self.parseSize(size),
'seeders': tryInt(result.find_all('td')[2].string),
'leechers': tryInt(result.find_all('td')[3].string),
'extra_score': extra_score,
'get_more_info': self.getMoreInfo
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
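The `extra_score` closure above leans on Python's tuple-indexing-by-boolean idiom: `(0, 10)[cond]` evaluates to `10` when `cond` is true (because `bool` subclasses `int`, so `True` indexes as `1`) and `0` otherwise. A quick illustration of how the badge bonuses combine:

```python
# (false_value, true_value)[condition] — bool subclasses int, so True == 1.
def badge_score(trusted, vip):
    trusted_pts = (0, 10)[trusted]
    vip_pts = (0, 20)[vip]
    return trusted_pts + vip_pts

print(badge_score(True, False))   # 10
print(badge_score(True, True))    # 30
```

The ternary form `10 if trusted else 0` is the more readable modern equivalent; the tuple form predates it.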
def isEnabled(self):
return super(ThePirateBay, self).isEnabled() and self.getDomain()
def getDomain(self, url = ''):
if not self.domain:
@@ -74,74 +124,6 @@ class ThePirateBay(TorrentProvider):
return cleanHost(self.domain).rstrip('/') + url
def search(self, movie, quality):
results = []
if self.isDisabled() or not self.getDomain():
return results
cache_key = 'thepiratebay.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
search_url = self.urls['search'] % (self.getDomain(), quote_plus(getTitle(movie['library']) + ' ' + quality['identifier']), self.getCatId(quality['identifier'])[0])
data = self.getCache(cache_key, search_url)
if data:
try:
soup = BeautifulSoup(data)
results_table = soup.find('table', attrs = {'id': 'searchResult'})
if not results_table:
return results
entries = results_table.find_all('tr')
for result in entries[2:]:
link = result.find(href = re.compile('torrent\/\d+\/'))
download = result.find(href = re.compile('magnet:'))
try:
size = re.search('Size (?P<size>.+),', unicode(result.select('font.detDesc')[0])).group('size')
except:
continue
if link and download:
def extra_score(item):
trusted = (0, 10)[result.find('img', alt = re.compile('Trusted')) != None]
vip = (0, 20)[result.find('img', alt = re.compile('VIP')) != None]
confirmed = (0, 30)[result.find('img', alt = re.compile('Helpers')) != None]
moderated = (0, 50)[result.find('img', alt = re.compile('Moderator')) != None]
return confirmed + trusted + vip + moderated
new = {
'id': re.search('/(?P<id>\d+)/', link['href']).group('id'),
'type': 'torrent_magnet',
'name': link.string,
'check_nzb': False,
'description': '',
'provider': self.getName(),
'url': download['href'],
'detail_url': self.getDomain(link['href']),
'size': self.parseSize(size),
'seeders': tryInt(result.find_all('td')[2].string),
'leechers': tryInt(result.find_all('td')[3].string),
'extra_score': extra_score,
'get_more_info': self.getMoreInfo
}
new['score'] = fireEvent('score.calculate', new, movie, single = True)
is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
imdb_results = False, single = True)
if is_correct_movie:
results.append(new)
self.found(new)
return results
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def getMoreInfo(self, item):
full_description = self.getCache('tpb.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
html = BeautifulSoup(full_description)

@@ -0,0 +1,32 @@
from .main import TorrentDay
def start():
return TorrentDay()
config = [{
'name': 'torrentday',
'groups': [
{
'tab': 'searcher',
'subtab': 'torrent_providers',
'name': 'TorrentDay',
'description': 'See <a href="http://www.td.af/">TorrentDay</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'username',
'default': '',
},
{
'name': 'password',
'default': '',
'type': 'password',
},
],
},
],
}]

@@ -0,0 +1,61 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class TorrentDay(TorrentProvider):
urls = {
'test': 'http://www.td.af/',
'login' : 'http://www.td.af/torrents/',
'detail': 'http://www.td.af/details.php?id=%s',
'search': 'http://www.td.af/V3/API/API.php',
'download': 'http://www.td.af/download.php/%s/%s',
}
cat_ids = [
([11], ['720p', '1080p']),
([1, 21, 25], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr', 'brrip']),
([3], ['dvdr']),
([5], ['bd50']),
]
http_time_between_calls = 1 #seconds
def _searchOnTitle(self, title, movie, quality, results):
q = '"%s %s"' % (title, movie['library']['year'])
params = {
'/browse.php?': None,
'cata': 'yes',
'jxt': 8,
'jxw': 'b',
'search': q,
}
data = self.getJsonData(self.urls['search'], params = params, opener = self.login_opener)
try: torrents = data.get('Fs', [])[0].get('Cn', {}).get('torrents', [])
except: return
for torrent in torrents:
results.append({
'id': torrent['id'],
'name': torrent['name'],
'url': self.urls['download'] % (torrent['id'], torrent['fname']),
'detail_url': self.urls['detail'] % torrent['id'],
'size': self.parseSize(torrent.get('size')),
'seeders': tryInt(torrent.get('seed')),
'leechers': tryInt(torrent.get('leech')),
'download': self.loginDownload,
})
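The torrent list above is pulled out of TorrentDay's nested JSON response with chained `.get()` calls wrapped in a bare `try`/`except`. A small sketch of that defensive extraction against the assumed response shape:

```python
def extract_torrents(data):
    """Pull the torrent list out of TorrentDay's nested response.

    Assumed shape: {'Fs': [{'Cn': {'torrents': [...]}}]} — any missing
    or malformed level yields an empty list instead of raising.
    """
    try:
        return data.get('Fs', [])[0].get('Cn', {}).get('torrents', [])
    except (IndexError, AttributeError, TypeError):
        return []
```

Catching specific exceptions keeps the defensive behaviour of the bare `except` in the diff without also swallowing unrelated errors.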
def getLoginParams(self):
return tryUrlencode({
'username': self.conf('username'),
'password': self.conf('password'),
'submit': 'submit',
})

@@ -1,10 +1,8 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import getTitle, tryInt
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentProvider
from urllib import quote_plus
import traceback
@@ -14,11 +12,11 @@ log = CPLog(__name__)
class TorrentLeech(TorrentProvider):
urls = {
'test' : 'http://torrentleech.org/',
'login' : 'http://torrentleech.org/user/account/login/',
'detail' : 'http://torrentleech.org/torrent/%s',
'search' : 'http://torrentleech.org/torrents/browse/index/query/%s/categories/%d',
'download' : 'http://torrentleech.org%s',
'test' : 'http://www.torrentleech.org/',
'login' : 'http://www.torrentleech.org/user/account/login/',
'detail' : 'http://www.torrentleech.org/torrent/%s',
'search' : 'http://www.torrentleech.org/torrents/browse/index/query/%s/categories/%d',
'download' : 'http://www.torrentleech.org%s',
}
cat_ids = [
@@ -32,20 +30,12 @@ class TorrentLeech(TorrentProvider):
]
http_time_between_calls = 1 #seconds
cat_backup_id = None
def search(self, movie, quality):
def _searchOnTitle(self, title, movie, quality, results):
results = []
if self.isDisabled():
return results
# Cookie login
if not self.login_opener and not self.login():
return results
cache_key = 'torrentleech.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
url = self.urls['search'] % (quote_plus(getTitle(movie['library']).replace(':', '') + ' ' + quality['identifier']), self.getCatId(quality['identifier'])[0])
data = self.getCache(cache_key, url, opener = self.login_opener)
url = self.urls['search'] % (tryUrlencode(title.replace(':', '') + ' ' + quality['identifier']), self.getCatId(quality['identifier'])[0])
data = self.getHTMLData(url, opener = self.login_opener)
if data:
html = BeautifulSoup(data)
@@ -53,7 +43,7 @@ class TorrentLeech(TorrentProvider):
try:
result_table = html.find('table', attrs = {'id' : 'torrenttable'})
if not result_table:
return results
return
entries = result_table.find_all('tr')
@@ -61,37 +51,22 @@ class TorrentLeech(TorrentProvider):
link = result.find('td', attrs = {'class' : 'name'}).find('a')
url = result.find('td', attrs = {'class' : 'quickdownload'}).find('a')
details = result.find('td', attrs = {'class' : 'name'}).find('a')
new = {
results.append({
'id': link['href'].replace('/torrent/', ''),
'name': link.string,
'type': 'torrent',
'check_nzb': False,
'description': '',
'provider': self.getName(),
'url': self.urls['download'] % url['href'],
'detail_url': self.urls['download'] % details['href'],
'download': self.loginDownload,
'size': self.parseSize(result.find_all('td')[4].string),
'seeders': tryInt(result.find('td', attrs = {'class' : 'seeders'}).string),
'leechers': tryInt(result.find('td', attrs = {'class' : 'leechers'}).string),
}
})
imdb_results = self.imdbMatch(self.urls['detail'] % new['id'], movie['library']['identifier'])
new['score'] = fireEvent('score.calculate', new, movie, single = True)
is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
imdb_results = imdb_results, single = True)
if is_correct_movie:
results.append(new)
self.found(new)
return results
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
return []
def getLoginParams(self):
return tryUrlencode({
'username': self.conf('username'),

@@ -100,7 +100,7 @@ class Release(Entity):
movie = ManyToOne('Movie')
status = ManyToOne('Status')
quality = ManyToOne('Quality')
files = ManyToMany('File', cascade = 'all, delete-orphan', single_parent = True)
files = ManyToMany('File')
info = OneToMany('ReleaseInfo', cascade = 'all, delete-orphan')
def to_dict(self, deep = {}, exclude = []):

@@ -4,7 +4,6 @@ from couchpotato.api import api, NonBlockHandler
from couchpotato.core.event import fireEventAsync, fireEvent
from couchpotato.core.helpers.variable import getDataDir, tryInt
from logging import handlers
from tornado.ioloop import IOLoop
from tornado.web import Application, FallbackHandler
from tornado.wsgi import WSGIContainer
from werkzeug.contrib.cache import FileSystemCache
@@ -189,7 +188,7 @@ def runCouchPotato(options, base_path, args, data_dir = None, log_dir = None, En
version_control(db, repo, version = latest_db_version)
current_db_version = db_version(db, repo)
if current_db_version < latest_db_version and not debug:
if current_db_version < latest_db_version and not development:
log.info('Doing database upgrade. From %d to %d', (current_db_version, latest_db_version))
upgrade(db, repo)
@@ -231,6 +230,7 @@ def runCouchPotato(options, base_path, args, data_dir = None, log_dir = None, En
fireEventAsync('app.load')
# Go go go!
from tornado.ioloop import IOLoop
web_container = WSGIContainer(app)
web_container._log = _log
loop = IOLoop.instance()

@@ -13,7 +13,7 @@ var ApiClass = new Class({
return new Request[r_type](Object.merge({
'callbackKey': 'callback_func',
'method': 'get',
'url': self.createUrl(type),
'url': self.createUrl(type, {'t': randomString()}),
}, options)).send()
},

@@ -1,89 +1,49 @@
#!/bin/sh
#
# PROVIDE: couchpotato
# REQUIRE: sabnzbd
# REQUIRE: DAEMON
# KEYWORD: shutdown
#
# Add the following lines to /etc/rc.conf.local or /etc/rc.conf
# to enable this service:
#
# couchpotato_enable (bool): Set to NO by default.
# Set it to YES to enable it.
# couchpotato_user: The user account the CouchPotato daemon runs as.
# It uses the '_sabnzbd' user by default. Do not
# set it to empty or it will run as root.
# couchpotato_dir: Directory where CouchPotato lives.
# Default: /usr/local/couchpotato
# couchpotato_chdir: Change to this directory before running CouchPotato.
# Default is same as couchpotato_dir.
# couchpotato_pid: The name of the pidfile to create.
# Default is couchpotato.pid in couchpotato_dir.
# Add the following lines to /etc/rc.conf to enable couchpotato:
# couchpotato_enable: Set to NO by default. Set it to YES to enable it.
# couchpotato_user: The user account the CouchPotato daemon runs as.
# couchpotato_dir: Directory where CouchPotato lives.
# Default: /usr/local/CouchPotatoServer
# couchpotato_datadir: Directory where CouchPotato user data lives.
# Default: $couchpotato_dir/data
# couchpotato_conf: Full path to the CouchPotato settings file.
# Default: $couchpotato_datadir/settings.conf
# couchpotato_pid: Full path to PID file.
# Default: $couchpotato_datadir/couchpotato.pid
# couchpotato_flags: Set additional flags as needed.
. /etc/rc.subr
name="couchpotato"
rcvar=${name}_enable
rcvar=couchpotato_enable
load_rc_config ${name}
: ${couchpotato_enable:="NO"}
: ${couchpotato_user:="_sabnzbd"}
: ${couchpotato_dir:="/usr/local/couchpotato"}
: ${couchpotato_chdir:="${couchpotato_dir}"}
: ${couchpotato_pid:="${couchpotato_dir}/couchpotato.pid"}
: ${couchpotato_conf:="${couchpotato_dir}/data/settings.conf"}
: ${couchpotato_enable:=NO}
: ${couchpotato_user:=} #default is root
: ${couchpotato_dir:=/usr/local/CouchPotatoServer}
: ${couchpotato_datadir:=${couchpotato_dir}/data}
: ${couchpotato_conf:=} #default is datadir/settings.conf
: ${couchpotato_pid:=} #default is datadir/couchpotato.pid
: ${couchpotato_flags:=}
WGET="/usr/local/bin/wget" # You need wget for this script to safely shut down CouchPotato.
if [ -e "${couchpotato_conf}" ]; then
HOST=`grep -A14 "\[core\]" "${couchpotato_conf}"|egrep "^host"|perl -wple 's/^host = (.*)$/$1/'`
PORT=`grep -A14 "\[core\]" "${couchpotato_conf}"|egrep "^port"|perl -wple 's/^port = (.*)$/$1/'`
CPAPI=`grep -A14 "\[core\]" "${couchpotato_conf}"|egrep "^api_key"|perl -wple 's/^api_key = (.*)$/$1/'`
command="${couchpotato_dir}/CouchPotato.py"
command_interpreter="/usr/local/bin/python"
command_args="--daemon --data_dir ${couchpotato_datadir}"
# append optional flags
if [ -n "${couchpotato_pid}" ]; then
pidfile=${couchpotato_pid}
couchpotato_flags="${couchpotato_flags} --pid_file ${couchpotato_pid}"
fi
status_cmd="${name}_status"
stop_cmd="${name}_stop"
command="/usr/sbin/daemon"
command_args="-f -p ${couchpotato_pid} python ${couchpotato_dir}/CouchPotato.py ${couchpotato_flags}"
# Check for wget and refuse to start without it.
if [ ! -x "${WGET}" ]; then
warn "couchpotato not started: You need wget to safely shut down CouchPotato."
exit 1
if [ -n "${couchpotato_conf}" ]; then
couchpotato_flags="${couchpotato_flags} --config_file ${couchpotato_conf}"
fi
# Ensure user is root when running this script.
if [ `id -u` != "0" ]; then
echo "Oops, you should be root before running this!"
exit 1
fi
verify_couchpotato_pid() {
# Make sure the pid corresponds to the CouchPotato process.
pid=`cat ${couchpotato_pid} 2>/dev/null`
ps -p ${pid} | grep -q "python ${couchpotato_dir}/CouchPotato.py"
return $?
}
# Try to stop CouchPotato cleanly by calling shutdown over http.
couchpotato_stop() {
if [ ! -e "${couchpotato_conf}" ]; then
echo "CouchPotato's settings file does not exist. Try starting CouchPotato, as this should create the file."
exit 1
fi
echo "Stopping $name"
verify_couchpotato_pid
${WGET} -O - -q "http://${HOST}:${PORT}/api/${CPAPI}/app.shutdown/" >/dev/null
if [ -n "${pid}" ]; then
wait_for_pids ${pid}
echo "Stopped"
fi
}
couchpotato_status() {
verify_couchpotato_pid && echo "$name is running as ${pid}" || echo "$name is not running"
}
run_rc_command "$1"
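With the rewritten script installed under the system's rc.d directory, a typical /etc/rc.conf entry using the variables documented above might look like this (usernames and paths are illustrative):

```shell
# /etc/rc.conf — enable CouchPotato and override a few defaults
couchpotato_enable="YES"
couchpotato_user="couchpotato"
couchpotato_dir="/usr/local/CouchPotatoServer"
couchpotato_datadir="/var/db/couchpotato"
# couchpotato_conf and couchpotato_pid fall back to paths
# under ${couchpotato_datadir} when left unset.
```

The service can then be controlled with the standard `service couchpotato start|stop|status` commands.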

@@ -1,5 +1,5 @@
#define MyAppName "CouchPotato"
#define MyAppVer "2.0.3"
#define MyAppVer "2.0.5"
[Setup]
AppName={#MyAppName}

@@ -1,9 +1,10 @@
# -*- coding: utf-8 -*-
"""
Copyright (c) 2003-2010 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
__version__ = "1.5"
__author__ = "Tomi Pieviläinen <tomi.pievilainen@iki.fi>"
__license__ = "Simplified BSD"
__version__ = "2.1"

@@ -1,11 +1,10 @@
"""
Copyright (c) 2003-2007 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
__license__ = "Simplified BSD"
import datetime
@@ -52,7 +51,7 @@ def easter(year, method=EASTER_WESTERN):
"""
if not (1 <= method <= 3):
raise ValueError, "invalid method"
raise ValueError("invalid method")
# g - Golden year - 1
# c - Century
@@ -88,5 +87,5 @@ def easter(year, method=EASTER_WESTERN):
p = i-j+e
d = 1+(p+27+(p+6)//40)%31
m = 3+(p+26)//30
return datetime.date(int(y),int(m),int(d))
return datetime.date(int(y), int(m), int(d))


@@ -2,25 +2,27 @@
"""
Copyright (c) 2003-2007 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
from __future__ import unicode_literals
__license__ = "Simplified BSD"
import datetime
import string
import time
import sys
import os
import collections
try:
from cStringIO import StringIO
from io import StringIO
except ImportError:
from StringIO import StringIO
from io import StringIO
import relativedelta
import tz
from six import text_type, binary_type, integer_types
from . import relativedelta
from . import tz
__all__ = ["parse", "parserinfo"]
@@ -39,7 +41,7 @@ __all__ = ["parse", "parserinfo"]
class _timelex(object):
def __init__(self, instream):
if isinstance(instream, basestring):
if isinstance(instream, text_type):
instream = StringIO(instream)
self.instream = instream
self.wordchars = ('abcdfeghijklmnopqrstuvwxyz'
@@ -133,12 +135,15 @@ class _timelex(object):
def __iter__(self):
return self
def next(self):
def __next__(self):
token = self.get_token()
if token is None:
raise StopIteration
return token
def next(self):
return self.__next__() # Python 2.x support
def split(cls, s):
return list(cls(s))
split = classmethod(split)
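The `_timelex` change above is the standard Python 2/3 iterator shim: Python 3 looks up `__next__` while Python 2 looks up `next`, so the renamed method is kept available under its old name as a plain alias. A minimal example of the pattern:

```python
class Countdown(object):
    """Iterator that works under both Python 2 (next) and 3 (__next__)."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):          # Python 3 iterator protocol
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1

    next = __next__              # Python 2.x support

print(list(Countdown(3)))  # [3, 2, 1]
```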
@@ -155,7 +160,7 @@ class _resultbase(object):
for attr in self.__slots__:
value = getattr(self, attr)
if value is not None:
l.append("%s=%s" % (attr, `value`))
l.append("%s=%s" % (attr, repr(value)))
return "%s(%s)" % (classname, ", ".join(l))
def __repr__(self):
@@ -167,7 +172,7 @@ class parserinfo(object):
# m from a.m/p.m, t from ISO T separator
JUMP = [" ", ".", ",", ";", "-", "/", "'",
"at", "on", "and", "ad", "m", "t", "of",
"st", "nd", "rd", "th"]
"st", "nd", "rd", "th"]
WEEKDAYS = [("Mon", "Monday"),
("Tue", "Tuesday"),
@@ -176,7 +181,7 @@ class parserinfo(object):
("Fri", "Friday"),
("Sat", "Saturday"),
("Sun", "Sunday")]
MONTHS = [("Jan", "January"),
MONTHS = [("Jan", "January"),
("Feb", "February"),
("Mar", "March"),
("Apr", "April"),
@@ -184,7 +189,7 @@ class parserinfo(object):
("Jun", "June"),
("Jul", "July"),
("Aug", "August"),
("Sep", "September"),
("Sep", "Sept", "September"),
("Oct", "October"),
("Nov", "November"),
("Dec", "December")]
@@ -197,7 +202,7 @@ class parserinfo(object):
PERTAIN = ["of"]
TZOFFSET = {}
def __init__(self, dayfirst=False, yearfirst=False):
def __init__(self, dayfirst = False, yearfirst = False):
self._jump = self._convert(self.JUMP)
self._weekdays = self._convert(self.WEEKDAYS)
self._months = self._convert(self.MONTHS)
@@ -210,7 +215,7 @@ class parserinfo(object):
self.yearfirst = yearfirst
self._year = time.localtime().tm_year
self._century = self._year//100*100
self._century = self._year // 100 * 100
def _convert(self, lst):
dct = {}
@@ -237,7 +242,7 @@ class parserinfo(object):
def month(self, name):
if len(name) >= 3:
try:
return self._months[name.lower()]+1
return self._months[name.lower()] + 1
except KeyError:
pass
return None
@@ -268,7 +273,7 @@ class parserinfo(object):
def convertyear(self, year):
if year < 100:
year += self._century
if abs(year-self._year) >= 50:
if abs(year - self._year) >= 50:
if year < self._year:
year += 100
else:
@@ -289,18 +294,18 @@ class parserinfo(object):
class parser(object):
def __init__(self, info=None):
def __init__(self, info = None):
self.info = info or parserinfo()
def parse(self, timestr, default=None,
ignoretz=False, tzinfos=None,
def parse(self, timestr, default = None,
ignoretz = False, tzinfos = None,
**kwargs):
if not default:
default = datetime.datetime.now().replace(hour=0, minute=0,
second=0, microsecond=0)
default = datetime.datetime.now().replace(hour = 0, minute = 0,
second = 0, microsecond = 0)
res = self._parse(timestr, **kwargs)
if res is None:
raise ValueError, "unknown string format"
raise ValueError("unknown string format")
repl = {}
for attr in ["year", "month", "day", "hour",
"minute", "second", "microsecond"]:
@@ -309,29 +314,29 @@ class parser(object):
repl[attr] = value
ret = default.replace(**repl)
if res.weekday is not None and not res.day:
ret = ret+relativedelta.relativedelta(weekday=res.weekday)
ret = ret + relativedelta.relativedelta(weekday = res.weekday)
if not ignoretz:
if callable(tzinfos) or tzinfos and res.tzname in tzinfos:
if callable(tzinfos):
if isinstance(tzinfos, collections.Callable) or tzinfos and res.tzname in tzinfos:
if isinstance(tzinfos, collections.Callable):
tzdata = tzinfos(res.tzname, res.tzoffset)
else:
tzdata = tzinfos.get(res.tzname)
if isinstance(tzdata, datetime.tzinfo):
tzinfo = tzdata
elif isinstance(tzdata, basestring):
elif isinstance(tzdata, text_type):
tzinfo = tz.tzstr(tzdata)
elif isinstance(tzdata, int):
elif isinstance(tzdata, integer_types):
tzinfo = tz.tzoffset(res.tzname, tzdata)
else:
raise ValueError, "offset must be tzinfo subclass, " \
"tz string, or int offset"
ret = ret.replace(tzinfo=tzinfo)
raise ValueError("offset must be tzinfo subclass, " \
"tz string, or int offset")
ret = ret.replace(tzinfo = tzinfo)
elif res.tzname and res.tzname in time.tzname:
ret = ret.replace(tzinfo=tz.tzlocal())
ret = ret.replace(tzinfo = tz.tzlocal())
elif res.tzoffset == 0:
ret = ret.replace(tzinfo=tz.tzutc())
ret = ret.replace(tzinfo = tz.tzutc())
elif res.tzoffset:
ret = ret.replace(tzinfo=tz.tzoffset(res.tzname, res.tzoffset))
ret = ret.replace(tzinfo = tz.tzoffset(res.tzname, res.tzoffset))
return ret
class _result(_resultbase):
@@ -339,7 +344,7 @@ class parser(object):
"hour", "minute", "second", "microsecond",
"tzname", "tzoffset"]
def _parse(self, timestr, dayfirst=None, yearfirst=None, fuzzy=False):
def _parse(self, timestr, dayfirst = None, yearfirst = None, fuzzy = False):
info = self.info
if dayfirst is None:
dayfirst = info.dayfirst
@@ -374,14 +379,14 @@ class parser(object):
and (i >= len_l or (l[i] != ':' and
info.hms(l[i]) is None))):
# 19990101T23[59]
s = l[i-1]
s = l[i - 1]
res.hour = int(s[:2])
if len_li == 4:
res.minute = int(s[2:])
elif len_li == 6 or (len_li > 6 and l[i-1].find('.') == 6):
elif len_li == 6 or (len_li > 6 and l[i - 1].find('.') == 6):
# YYMMDD or HHMMSS[.ss]
s = l[i-1]
if not ymd and l[i-1].find('.') == -1:
s = l[i - 1]
if not ymd and l[i - 1].find('.') == -1:
ymd.append(info.convertyear(int(s[:2])))
ymd.append(int(s[2:4]))
ymd.append(int(s[4:]))
@@ -392,13 +397,13 @@ class parser(object):
res.second, res.microsecond = _parsems(s[4:])
elif len_li == 8:
# YYYYMMDD
s = l[i-1]
s = l[i - 1]
ymd.append(int(s[:4]))
ymd.append(int(s[4:6]))
ymd.append(int(s[6:]))
elif len_li in (12, 14):
# YYYYMMDDhhmm[ss]
s = l[i-1]
s = l[i - 1]
ymd.append(int(s[:4]))
ymd.append(int(s[4:6]))
ymd.append(int(s[6:8]))
@@ -407,8 +412,8 @@ class parser(object):
if len_li == 14:
res.second = int(s[12:])
elif ((i < len_l and info.hms(l[i]) is not None) or
(i+1 < len_l and l[i] == ' ' and
info.hms(l[i+1]) is not None)):
(i + 1 < len_l and l[i] == ' ' and
info.hms(l[i + 1]) is not None)):
# HH[ ]h or MM[ ]m or SS[.ss][ ]s
if l[i] == ' ':
i += 1
@@ -416,12 +421,12 @@ class parser(object):
while True:
if idx == 0:
res.hour = int(value)
if value%1:
res.minute = int(60*(value%1))
if value % 1:
res.minute = int(60 * (value % 1))
elif idx == 1:
res.minute = int(value)
if value%1:
res.second = int(60*(value%1))
if value % 1:
res.second = int(60 * (value % 1))
elif idx == 2:
res.second, res.microsecond = \
_parsems(value_repr)
@@ -441,17 +446,28 @@ class parser(object):
newidx = info.hms(l[i])
if newidx is not None:
idx = newidx
elif i+1 < len_l and l[i] == ':':
elif i == len_l and l[i - 2] == ' ' and info.hms(l[i - 3]) is not None:
# X h MM or X m SS
idx = info.hms(l[i - 3]) + 1
if idx == 1:
res.minute = int(value)
if value % 1:
res.second = int(60 * (value % 1))
elif idx == 2:
res.second, res.microsecond = \
_parsems(value_repr)
i += 1
elif i + 1 < len_l and l[i] == ':':
# HH:MM[:SS[.ss]]
res.hour = int(value)
i += 1
value = float(l[i])
res.minute = int(value)
if value%1:
res.second = int(60*(value%1))
if value % 1:
res.second = int(60 * (value % 1))
i += 1
if i < len_l and l[i] == ':':
res.second, res.microsecond = _parsems(l[i+1])
res.second, res.microsecond = _parsems(l[i + 1])
i += 2
elif i < len_l and l[i] in ('-', '/', '.'):
sep = l[i]
@@ -467,7 +483,7 @@ class parser(object):
if value is not None:
ymd.append(value)
assert mstridx == -1
mstridx = len(ymd)-1
mstridx = len(ymd) - 1
else:
return None
i += 1
@@ -477,18 +493,18 @@ class parser(object):
value = info.month(l[i])
if value is not None:
ymd.append(value)
mstridx = len(ymd)-1
mstridx = len(ymd) - 1
assert mstridx == -1
else:
ymd.append(int(l[i]))
i += 1
elif i >= len_l or info.jump(l[i]):
if i+1 < len_l and info.ampm(l[i+1]) is not None:
if i + 1 < len_l and info.ampm(l[i + 1]) is not None:
# 12 am
res.hour = int(value)
if res.hour < 12 and info.ampm(l[i+1]) == 1:
if res.hour < 12 and info.ampm(l[i + 1]) == 1:
res.hour += 12
elif res.hour == 12 and info.ampm(l[i+1]) == 0:
elif res.hour == 12 and info.ampm(l[i + 1]) == 0:
res.hour = 0
i += 1
else:
@@ -521,7 +537,7 @@ class parser(object):
if value is not None:
ymd.append(value)
assert mstridx == -1
mstridx = len(ymd)-1
mstridx = len(ymd) - 1
i += 1
if i < len_l:
if l[i] in ('-', '/'):
@@ -535,12 +551,12 @@ class parser(object):
i += 1
ymd.append(int(l[i]))
i += 1
elif (i+3 < len_l and l[i] == l[i+2] == ' '
and info.pertain(l[i+1])):
elif (i + 3 < len_l and l[i] == l[i + 2] == ' '
and info.pertain(l[i + 1])):
# Jan of 01
# In this case, 01 is clearly year
try:
value = int(l[i+3])
value = int(l[i + 3])
except ValueError:
# Wrong guess
pass
@@ -585,32 +601,32 @@ class parser(object):
# Check for a numbered timezone
if res.hour is not None and l[i] in ('+', '-'):
signal = (-1,1)[l[i] == '+']
signal = (-1, 1)[l[i] == '+']
i += 1
len_li = len(l[i])
if len_li == 4:
# -0300
res.tzoffset = int(l[i][:2])*3600+int(l[i][2:])*60
elif i+1 < len_l and l[i+1] == ':':
res.tzoffset = int(l[i][:2]) * 3600 + int(l[i][2:]) * 60
elif i + 1 < len_l and l[i + 1] == ':':
# -03:00
res.tzoffset = int(l[i])*3600+int(l[i+2])*60
res.tzoffset = int(l[i]) * 3600 + int(l[i + 2]) * 60
i += 2
elif len_li <= 2:
# -[0]3
res.tzoffset = int(l[i][:2])*3600
res.tzoffset = int(l[i][:2]) * 3600
else:
return None
i += 1
res.tzoffset *= signal
# Look for a timezone name between parenthesis
if (i+3 < len_l and
info.jump(l[i]) and l[i+1] == '(' and l[i+3] == ')' and
3 <= len(l[i+2]) <= 5 and
not [x for x in l[i+2]
if (i + 3 < len_l and
info.jump(l[i]) and l[i + 1] == '(' and l[i + 3] == ')' and
3 <= len(l[i + 2]) <= 5 and
not [x for x in l[i + 2]
if x not in string.ascii_uppercase]):
# -0300 (BRST)
res.tzname = l[i+2]
res.tzname = l[i + 2]
i += 4
continue
@@ -690,7 +706,12 @@ class parser(object):
return res
DEFAULTPARSER = parser()
def parse(timestr, parserinfo=None, **kwargs):
def parse(timestr, parserinfo = None, **kwargs):
# Python 2.x support: datetimes return their string representation as
# bytes in 2.x and unicode in 3.x, so it's reasonable to expect that
# the parser will get both kinds. Internally we use unicode only.
if isinstance(timestr, binary_type):
timestr = timestr.decode()
if parserinfo:
return parser(parserinfo).parse(timestr, **kwargs)
else:
@@ -743,7 +764,7 @@ class _tzparser(object):
if l[i] in ('+', '-'):
# Yes, that's right. See the TZ variable
# documentation.
signal = (1,-1)[l[i] == '+']
signal = (1, -1)[l[i] == '+']
i += 1
else:
signal = -1
@@ -751,16 +772,16 @@ class _tzparser(object):
if len_li == 4:
# -0300
setattr(res, offattr,
(int(l[i][:2])*3600+int(l[i][2:])*60)*signal)
elif i+1 < len_l and l[i+1] == ':':
(int(l[i][:2]) * 3600 + int(l[i][2:]) * 60) * signal)
elif i + 1 < len_l and l[i + 1] == ':':
# -03:00
setattr(res, offattr,
(int(l[i])*3600+int(l[i+2])*60)*signal)
(int(l[i]) * 3600 + int(l[i + 2]) * 60) * signal)
i += 2
elif len_li <= 2:
# -[0]3
setattr(res, offattr,
int(l[i][:2])*3600*signal)
int(l[i][:2]) * 3600 * signal)
else:
return None
i += 1
@@ -787,29 +808,29 @@ class _tzparser(object):
x.month = int(l[i])
i += 2
if l[i] == '-':
value = int(l[i+1])*-1
value = int(l[i + 1]) * -1
i += 1
else:
value = int(l[i])
i += 2
if value:
x.week = value
x.weekday = (int(l[i])-1)%7
x.weekday = (int(l[i]) - 1) % 7
else:
x.day = int(l[i])
i += 2
x.time = int(l[i])
i += 2
if i < len_l:
if l[i] in ('-','+'):
signal = (-1,1)[l[i] == "+"]
if l[i] in ('-', '+'):
signal = (-1, 1)[l[i] == "+"]
i += 1
else:
signal = 1
res.dstoffset = (res.stdoffset+int(l[i]))*signal
res.dstoffset = (res.stdoffset + int(l[i])) * signal
elif (l.count(',') == 2 and l[i:].count('/') <= 2 and
not [y for x in l[i:] if x not in (',','/','J','M',
'.','-',':')
not [y for x in l[i:] if x not in (',', '/', 'J', 'M',
'.', '-', ':')
for y in x if y not in "0123456789"]):
for x in (res.start, res.end):
if l[i] == 'J':
@@ -829,10 +850,10 @@ class _tzparser(object):
i += 1
assert l[i] in ('-', '.')
i += 1
x.weekday = (int(l[i])-1)%7
x.weekday = (int(l[i]) - 1) % 7
else:
# year day (zero based)
x.yday = int(l[i])+1
x.yday = int(l[i]) + 1
i += 1
@@ -842,17 +863,17 @@ class _tzparser(object):
len_li = len(l[i])
if len_li == 4:
# -0300
x.time = (int(l[i][:2])*3600+int(l[i][2:])*60)
elif i+1 < len_l and l[i+1] == ':':
x.time = (int(l[i][:2]) * 3600 + int(l[i][2:]) * 60)
elif i + 1 < len_l and l[i + 1] == ':':
# -03:00
x.time = int(l[i])*3600+int(l[i+2])*60
x.time = int(l[i]) * 3600 + int(l[i + 2]) * 60
i += 2
if i+1 < len_l and l[i+1] == ':':
if i + 1 < len_l and l[i + 1] == ':':
i += 2
x.time += int(l[i])
elif len_li <= 2:
# -[0]3
x.time = (int(l[i][:2])*3600)
x.time = (int(l[i][:2]) * 3600)
else:
return None
i += 1
@@ -865,7 +886,7 @@ class _tzparser(object):
except (IndexError, ValueError, AssertionError):
return None
return res
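A recurring pattern in the parser diff above is replacing direct type checks (`isinstance(tzdata, basestring)`, `isinstance(tzdata, int)`) with six's `text_type` and `integer_types` so the same check works on Python 2 and 3. A minimal sketch of that pattern, with the six names defined inline rather than imported and a hypothetical `normalize_offset` helper standing in for the `tzinfos` handling in `parser.parse`:

```python
import sys

# Inline stand-ins for the six helpers the diff imports
# (six.integer_types, six.text_type); shown for illustration.
if sys.version_info[0] >= 3:
    integer_types = (int,)
    text_type = str
else:
    integer_types = (int, long)  # noqa: F821 - Python 2 only
    text_type = unicode          # noqa: F821 - Python 2 only

def normalize_offset(tzdata):
    # Hypothetical helper mirroring the tzinfos branch in parse():
    # accept a tz string or an integer offset in seconds.
    if isinstance(tzdata, text_type):
        return ("tzstr", tzdata)
    elif isinstance(tzdata, integer_types):
        return ("tzoffset", tzdata)
    raise ValueError("offset must be tz string, or int offset")
```

Using tuples of types from one place keeps the 2/3 difference out of every call site, which is why the diff touches so many `type(x) is int` lines.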


@@ -1,15 +1,16 @@
"""
Copyright (c) 2003-2010 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
__license__ = "Simplified BSD"
import datetime
import calendar
from six import integer_types
__all__ = ["relativedelta", "MO", "TU", "WE", "TH", "FR", "SA", "SU"]
class weekday(object):
@@ -42,7 +43,7 @@ class weekday(object):
MO, TU, WE, TH, FR, SA, SU = weekdays = tuple([weekday(x) for x in range(7)])
class relativedelta:
class relativedelta(object):
"""
The relativedelta type is based on the specification of the excellent
work done by M.-A. Lemburg in his mx.DateTime extension. However,
@@ -113,10 +114,9 @@ Here is the behavior of operations with relativedelta:
yearday=None, nlyearday=None,
hour=None, minute=None, second=None, microsecond=None):
if dt1 and dt2:
if not isinstance(dt1, datetime.date) or \
not isinstance(dt2, datetime.date):
raise TypeError, "relativedelta only diffs datetime/date"
if type(dt1) is not type(dt2):
if (not isinstance(dt1, datetime.date)) or (not isinstance(dt2, datetime.date)):
raise TypeError("relativedelta only diffs datetime/date")
if not type(dt1) == type(dt2): #isinstance(dt1, type(dt2)):
if not isinstance(dt1, datetime.datetime):
dt1 = datetime.datetime.fromordinal(dt1.toordinal())
elif not isinstance(dt2, datetime.datetime):
@@ -172,7 +172,7 @@ Here is the behavior of operations with relativedelta:
self.second = second
self.microsecond = microsecond
if type(weekday) is int:
if isinstance(weekday, integer_types):
self.weekday = weekdays[weekday]
else:
self.weekday = weekday
@@ -185,7 +185,7 @@ Here is the behavior of operations with relativedelta:
if yearday > 59:
self.leapdays = -1
if yday:
ydayidx = [31,59,90,120,151,181,212,243,273,304,334,366]
ydayidx = [31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 366]
for idx, ydays in enumerate(ydayidx):
if yday <= ydays:
self.month = idx+1
@@ -195,7 +195,7 @@ Here is the behavior of operations with relativedelta:
self.day = yday-ydayidx[idx-1]
break
else:
raise ValueError, "invalid year day (%d)" % yday
raise ValueError("invalid year day (%d)" % yday)
self._fix()
@@ -242,9 +242,26 @@ Here is the behavior of operations with relativedelta:
else:
self.years = 0
def __radd__(self, other):
def __add__(self, other):
if isinstance(other, relativedelta):
return relativedelta(years=other.years+self.years,
months=other.months+self.months,
days=other.days+self.days,
hours=other.hours+self.hours,
minutes=other.minutes+self.minutes,
seconds=other.seconds+self.seconds,
microseconds=other.microseconds+self.microseconds,
leapdays=other.leapdays or self.leapdays,
year=other.year or self.year,
month=other.month or self.month,
day=other.day or self.day,
weekday=other.weekday or self.weekday,
hour=other.hour or self.hour,
minute=other.minute or self.minute,
second=other.second or self.second,
microsecond=other.microsecond or self.microsecond)
if not isinstance(other, datetime.date):
raise TypeError, "unsupported type for add operation"
raise TypeError("unsupported type for add operation")
elif self._has_time and not isinstance(other, datetime.datetime):
other = datetime.datetime.fromordinal(other.toordinal())
year = (self.year or other.year)+self.years
@@ -285,48 +302,31 @@ Here is the behavior of operations with relativedelta:
ret += datetime.timedelta(days=jumpdays)
return ret
def __radd__(self, other):
return self.__add__(other)
def __rsub__(self, other):
return self.__neg__().__radd__(other)
def __add__(self, other):
if not isinstance(other, relativedelta):
raise TypeError, "unsupported type for add operation"
return relativedelta(years=other.years+self.years,
months=other.months+self.months,
days=other.days+self.days,
hours=other.hours+self.hours,
minutes=other.minutes+self.minutes,
seconds=other.seconds+self.seconds,
microseconds=other.microseconds+self.microseconds,
leapdays=other.leapdays or self.leapdays,
year=other.year or self.year,
month=other.month or self.month,
day=other.day or self.day,
weekday=other.weekday or self.weekday,
hour=other.hour or self.hour,
minute=other.minute or self.minute,
second=other.second or self.second,
microsecond=other.second or self.microsecond)
def __sub__(self, other):
if not isinstance(other, relativedelta):
raise TypeError, "unsupported type for sub operation"
return relativedelta(years=other.years-self.years,
months=other.months-self.months,
days=other.days-self.days,
hours=other.hours-self.hours,
minutes=other.minutes-self.minutes,
seconds=other.seconds-self.seconds,
microseconds=other.microseconds-self.microseconds,
leapdays=other.leapdays or self.leapdays,
year=other.year or self.year,
month=other.month or self.month,
day=other.day or self.day,
weekday=other.weekday or self.weekday,
hour=other.hour or self.hour,
minute=other.minute or self.minute,
second=other.second or self.second,
microsecond=other.second or self.microsecond)
raise TypeError("unsupported type for sub operation")
return relativedelta(years=self.years-other.years,
months=self.months-other.months,
days=self.days-other.days,
hours=self.hours-other.hours,
minutes=self.minutes-other.minutes,
seconds=self.seconds-other.seconds,
microseconds=self.microseconds-other.microseconds,
leapdays=self.leapdays or other.leapdays,
year=self.year or other.year,
month=self.month or other.month,
day=self.day or other.day,
weekday=self.weekday or other.weekday,
hour=self.hour or other.hour,
minute=self.minute or other.minute,
second=self.second or other.second,
microsecond=self.microsecond or other.microsecond)
def __neg__(self):
return relativedelta(years=-self.years,
@@ -346,7 +346,7 @@ Here is the behavior of operations with relativedelta:
second=self.second,
microsecond=self.microsecond)
def __nonzero__(self):
def __bool__(self):
return not (not self.years and
not self.months and
not self.days and
@@ -366,13 +366,13 @@ Here is the behavior of operations with relativedelta:
def __mul__(self, other):
f = float(other)
return relativedelta(years=self.years*f,
months=self.months*f,
days=self.days*f,
hours=self.hours*f,
minutes=self.minutes*f,
seconds=self.seconds*f,
microseconds=self.microseconds*f,
return relativedelta(years=int(self.years*f),
months=int(self.months*f),
days=int(self.days*f),
hours=int(self.hours*f),
minutes=int(self.minutes*f),
seconds=int(self.seconds*f),
microseconds=int(self.microseconds*f),
leapdays=self.leapdays,
year=self.year,
month=self.month,
@@ -383,6 +383,8 @@ Here is the behavior of operations with relativedelta:
second=self.second,
microsecond=self.microsecond)
__rmul__ = __mul__
def __eq__(self, other):
if not isinstance(other, relativedelta):
return False
@@ -415,6 +417,8 @@ Here is the behavior of operations with relativedelta:
def __div__(self, other):
return self.__mul__(1/float(other))
__truediv__ = __div__
def __repr__(self):
l = []
for attr in ["years", "months", "days", "leapdays",
@@ -426,7 +430,7 @@ Here is the behavior of operations with relativedelta:
"hour", "minute", "second", "microsecond"]:
value = getattr(self, attr)
if value is not None:
l.append("%s=%s" % (attr, `value`))
l.append("%s=%s" % (attr, repr(value)))
return "%s(%s)" % (self.__class__.__name__, ", ".join(l))
# vim:ts=4:sw=4:et
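The relativedelta diff renames `__nonzero__` to `__bool__`, since Python 3 renamed the truth-testing hook. The usual dual-version trick, sketched here with a hypothetical `Delta` class standing in for relativedelta, is to define `__bool__` and alias the old name to it:

```python
class Delta(object):
    # Minimal sketch: truthiness works on both Python 2 and 3 by
    # defining __bool__ and aliasing __nonzero__ to the same method.
    def __init__(self, days=0):
        self.days = days

    def __bool__(self):
        return self.days != 0

    __nonzero__ = __bool__  # Python 2 name for the same hook
```

The same aliasing appears for the iterator protocol in this diff (`next = __next__`), so a single method body serves both runtimes.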


@@ -1,18 +1,22 @@
"""
Copyright (c) 2003-2010 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
__license__ = "Simplified BSD"
import itertools
import datetime
import calendar
import thread
try:
import _thread
except ImportError:
import thread as _thread
import sys
from six import advance_iterator, integer_types
__all__ = ["rrule", "rruleset", "rrulestr",
"YEARLY", "MONTHLY", "WEEKLY", "DAILY",
"HOURLY", "MINUTELY", "SECONDLY",
@@ -22,15 +26,15 @@ __all__ = ["rrule", "rruleset", "rrulestr",
M366MASK = tuple([1]*31+[2]*29+[3]*31+[4]*30+[5]*31+[6]*30+
[7]*31+[8]*31+[9]*30+[10]*31+[11]*30+[12]*31+[1]*7)
M365MASK = list(M366MASK)
M29, M30, M31 = range(1,30), range(1,31), range(1,32)
M29, M30, M31 = list(range(1, 30)), list(range(1, 31)), list(range(1, 32))
MDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7])
MDAY365MASK = list(MDAY366MASK)
M29, M30, M31 = range(-29,0), range(-30,0), range(-31,0)
M29, M30, M31 = list(range(-29, 0)), list(range(-30, 0)), list(range(-31, 0))
NMDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7])
NMDAY365MASK = list(NMDAY366MASK)
M366RANGE = (0,31,60,91,121,152,182,213,244,274,305,335,366)
M365RANGE = (0,31,59,90,120,151,181,212,243,273,304,334,365)
WDAYMASK = [0,1,2,3,4,5,6]*55
M366RANGE = (0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366)
M365RANGE = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365)
WDAYMASK = [0, 1, 2, 3, 4, 5, 6]*55
del M29, M30, M31, M365MASK[59], MDAY365MASK[59], NMDAY365MASK[31]
MDAY365MASK = tuple(MDAY365MASK)
M365MASK = tuple(M365MASK)
@@ -41,7 +45,7 @@ M365MASK = tuple(M365MASK)
DAILY,
HOURLY,
MINUTELY,
SECONDLY) = range(7)
SECONDLY) = list(range(7))
# Imported on demand.
easter = None
@@ -52,7 +56,7 @@ class weekday(object):
def __init__(self, weekday, n=None):
if n == 0:
raise ValueError, "Can't create weekday with n == 0"
raise ValueError("Can't create weekday with n == 0")
self.weekday = weekday
self.n = n
@@ -79,11 +83,11 @@ class weekday(object):
MO, TU, WE, TH, FR, SA, SU = weekdays = tuple([weekday(x) for x in range(7)])
class rrulebase:
class rrulebase(object):
def __init__(self, cache=False):
if cache:
self._cache = []
self._cache_lock = thread.allocate_lock()
self._cache_lock = _thread.allocate_lock()
self._cache_gen = self._iter()
self._cache_complete = False
else:
@@ -112,7 +116,7 @@ class rrulebase:
break
try:
for j in range(10):
cache.append(gen.next())
cache.append(advance_iterator(gen))
except StopIteration:
self._cache_gen = gen = None
self._cache_complete = True
@@ -133,13 +137,13 @@ class rrulebase:
else:
return list(itertools.islice(self,
item.start or 0,
item.stop or sys.maxint,
item.stop or sys.maxsize,
item.step or 1))
elif item >= 0:
gen = iter(self)
try:
for i in range(item+1):
res = gen.next()
res = advance_iterator(gen)
except StopIteration:
raise IndexError
return res
@@ -232,7 +236,7 @@ class rrule(rrulebase):
byweekno=None, byweekday=None,
byhour=None, byminute=None, bysecond=None,
cache=False):
rrulebase.__init__(self, cache)
super(rrule, self).__init__(cache)
global easter
if not dtstart:
dtstart = datetime.datetime.now().replace(microsecond=0)
@@ -250,13 +254,13 @@ class rrule(rrulebase):
self._until = until
if wkst is None:
self._wkst = calendar.firstweekday()
elif type(wkst) is int:
elif isinstance(wkst, integer_types):
self._wkst = wkst
else:
self._wkst = wkst.weekday
if bysetpos is None:
self._bysetpos = None
elif type(bysetpos) is int:
elif isinstance(bysetpos, integer_types):
if bysetpos == 0 or not (-366 <= bysetpos <= 366):
raise ValueError("bysetpos must be between 1 and 366, "
"or between -366 and -1")
@@ -280,14 +284,14 @@ class rrule(rrulebase):
# bymonth
if not bymonth:
self._bymonth = None
elif type(bymonth) is int:
elif isinstance(bymonth, integer_types):
self._bymonth = (bymonth,)
else:
self._bymonth = tuple(bymonth)
# byyearday
if not byyearday:
self._byyearday = None
elif type(byyearday) is int:
elif isinstance(byyearday, integer_types):
self._byyearday = (byyearday,)
else:
self._byyearday = tuple(byyearday)
@@ -295,7 +299,7 @@ class rrule(rrulebase):
if byeaster is not None:
if not easter:
from dateutil import easter
if type(byeaster) is int:
if isinstance(byeaster, integer_types):
self._byeaster = (byeaster,)
else:
self._byeaster = tuple(byeaster)
@@ -305,7 +309,7 @@ class rrule(rrulebase):
if not bymonthday:
self._bymonthday = ()
self._bynmonthday = ()
elif type(bymonthday) is int:
elif isinstance(bymonthday, integer_types):
if bymonthday < 0:
self._bynmonthday = (bymonthday,)
self._bymonthday = ()
@@ -318,7 +322,7 @@ class rrule(rrulebase):
# byweekno
if byweekno is None:
self._byweekno = None
elif type(byweekno) is int:
elif isinstance(byweekno, integer_types):
self._byweekno = (byweekno,)
else:
self._byweekno = tuple(byweekno)
@@ -326,7 +330,7 @@ class rrule(rrulebase):
if byweekday is None:
self._byweekday = None
self._bynweekday = None
elif type(byweekday) is int:
elif isinstance(byweekday, integer_types):
self._byweekday = (byweekday,)
self._bynweekday = None
elif hasattr(byweekday, "n"):
@@ -340,7 +344,7 @@ class rrule(rrulebase):
self._byweekday = []
self._bynweekday = []
for wday in byweekday:
if type(wday) is int:
if isinstance(wday, integer_types):
self._byweekday.append(wday)
elif not wday.n or freq > MONTHLY:
self._byweekday.append(wday.weekday)
@@ -358,7 +362,7 @@ class rrule(rrulebase):
self._byhour = (dtstart.hour,)
else:
self._byhour = None
elif type(byhour) is int:
elif isinstance(byhour, integer_types):
self._byhour = (byhour,)
else:
self._byhour = tuple(byhour)
@@ -368,7 +372,7 @@ class rrule(rrulebase):
self._byminute = (dtstart.minute,)
else:
self._byminute = None
elif type(byminute) is int:
elif isinstance(byminute, integer_types):
self._byminute = (byminute,)
else:
self._byminute = tuple(byminute)
@@ -378,7 +382,7 @@ class rrule(rrulebase):
self._bysecond = (dtstart.second,)
else:
self._bysecond = None
elif type(bysecond) is int:
elif isinstance(bysecond, integer_types):
self._bysecond = (bysecond,)
else:
self._bysecond = tuple(bysecond)
@@ -716,7 +720,7 @@ class _iterinfo(object):
# days from last year's last week number in
# this year.
if -1 not in rr._byweekno:
lyearweekday = datetime.date(year-1,1,1).weekday()
lyearweekday = datetime.date(year-1, 1, 1).weekday()
lno1wkst = (7-lyearweekday+rr._wkst)%7
lyearlen = 365+calendar.isleap(year-1)
if lno1wkst >= 4:
@@ -768,7 +772,7 @@ class _iterinfo(object):
self.lastmonth = month
def ydayset(self, year, month, day):
return range(self.yearlen), 0, self.yearlen
return list(range(self.yearlen)), 0, self.yearlen
def mdayset(self, year, month, day):
set = [None]*self.yearlen
@@ -823,27 +827,38 @@ class _iterinfo(object):
class rruleset(rrulebase):
class _genitem:
class _genitem(object):
def __init__(self, genlist, gen):
try:
self.dt = gen()
self.dt = advance_iterator(gen)
genlist.append(self)
except StopIteration:
pass
self.genlist = genlist
self.gen = gen
def next(self):
def __next__(self):
try:
self.dt = self.gen()
self.dt = advance_iterator(self.gen)
except StopIteration:
self.genlist.remove(self)
def __cmp__(self, other):
return cmp(self.dt, other.dt)
next = __next__
def __lt__(self, other):
return self.dt < other.dt
def __gt__(self, other):
return self.dt > other.dt
def __eq__(self, other):
return self.dt == other.dt
def __ne__(self, other):
return self.dt != other.dt
def __init__(self, cache=False):
rrulebase.__init__(self, cache)
super(rruleset, self).__init__(cache)
self._rrule = []
self._rdate = []
self._exrule = []
@@ -851,7 +866,7 @@ class rruleset(rrulebase):
def rrule(self, rrule):
self._rrule.append(rrule)
def rdate(self, rdate):
self._rdate.append(rdate)
@@ -864,14 +879,14 @@ class rruleset(rrulebase):
def _iter(self):
rlist = []
self._rdate.sort()
self._genitem(rlist, iter(self._rdate).next)
for gen in [iter(x).next for x in self._rrule]:
self._genitem(rlist, iter(self._rdate))
for gen in [iter(x) for x in self._rrule]:
self._genitem(rlist, gen)
rlist.sort()
exlist = []
self._exdate.sort()
self._genitem(exlist, iter(self._exdate).next)
for gen in [iter(x).next for x in self._exrule]:
self._genitem(exlist, iter(self._exdate))
for gen in [iter(x) for x in self._exrule]:
self._genitem(exlist, gen)
exlist.sort()
lastdt = None
@@ -880,17 +895,17 @@ class rruleset(rrulebase):
ritem = rlist[0]
if not lastdt or lastdt != ritem.dt:
while exlist and exlist[0] < ritem:
exlist[0].next()
advance_iterator(exlist[0])
exlist.sort()
if not exlist or ritem != exlist[0]:
total += 1
yield ritem.dt
lastdt = ritem.dt
ritem.next()
advance_iterator(ritem)
rlist.sort()
self._len = total
class _rrulestr:
class _rrulestr(object):
_freq_map = {"YEARLY": YEARLY,
"MONTHLY": MONTHLY,
@@ -932,7 +947,7 @@ class _rrulestr:
ignoretz=kwargs.get("ignoretz"),
tzinfos=kwargs.get("tzinfos"))
except ValueError:
raise ValueError, "invalid until date"
raise ValueError("invalid until date")
def _handle_WKST(self, rrkwargs, name, value, **kwargs):
rrkwargs["wkst"] = self._weekday_map[value]
@@ -959,7 +974,7 @@ class _rrulestr:
if line.find(':') != -1:
name, value = line.split(':')
if name != "RRULE":
raise ValueError, "unknown parameter name"
raise ValueError("unknown parameter name")
else:
value = line
rrkwargs = {}
@@ -972,9 +987,9 @@ class _rrulestr:
ignoretz=ignoretz,
tzinfos=tzinfos)
except AttributeError:
raise ValueError, "unknown parameter '%s'" % name
raise ValueError("unknown parameter '%s'" % name)
except (KeyError, ValueError):
raise ValueError, "invalid '%s': %s" % (name, value)
raise ValueError("invalid '%s': %s" % (name, value))
return rrule(dtstart=dtstart, cache=cache, **rrkwargs)
def _parse_rfc(self, s,
@@ -991,7 +1006,7 @@ class _rrulestr:
unfold = True
s = s.upper()
if not s.strip():
raise ValueError, "empty string"
raise ValueError("empty string")
if unfold:
lines = s.splitlines()
i = 0
@@ -1026,36 +1041,36 @@ class _rrulestr:
name, value = line.split(':', 1)
parms = name.split(';')
if not parms:
raise ValueError, "empty property name"
raise ValueError("empty property name")
name = parms[0]
parms = parms[1:]
if name == "RRULE":
for parm in parms:
raise ValueError, "unsupported RRULE parm: "+parm
raise ValueError("unsupported RRULE parm: "+parm)
rrulevals.append(value)
elif name == "RDATE":
for parm in parms:
if parm != "VALUE=DATE-TIME":
raise ValueError, "unsupported RDATE parm: "+parm
raise ValueError("unsupported RDATE parm: "+parm)
rdatevals.append(value)
elif name == "EXRULE":
for parm in parms:
raise ValueError, "unsupported EXRULE parm: "+parm
raise ValueError("unsupported EXRULE parm: "+parm)
exrulevals.append(value)
elif name == "EXDATE":
for parm in parms:
if parm != "VALUE=DATE-TIME":
raise ValueError, "unsupported RDATE parm: "+parm
raise ValueError("unsupported RDATE parm: "+parm)
exdatevals.append(value)
elif name == "DTSTART":
for parm in parms:
raise ValueError, "unsupported DTSTART parm: "+parm
raise ValueError("unsupported DTSTART parm: "+parm)
if not parser:
from dateutil import parser
dtstart = parser.parse(value, ignoretz=ignoretz,
tzinfos=tzinfos)
else:
raise ValueError, "unsupported property: "+name
raise ValueError("unsupported property: "+name)
if (forceset or len(rrulevals) > 1 or
rdatevals or exrulevals or exdatevals):
if not parser and (rdatevals or exdatevals):
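Throughout the rrule diff, explicit `gen.next()` calls are replaced with `six.advance_iterator(gen)`, because Python 3 renamed the iterator method to `__next__`. A sketch of an equivalent shim (assuming six's semantics; on Python 3 six simply delegates to the builtin `next`):

```python
def advance_iterator(it):
    # Call the Python 3 iterator hook if present, else fall back
    # to the Python 2 spelling. Mirrors what six provides.
    try:
        step = it.__next__
    except AttributeError:
        step = it.next  # Python 2 iterators
    return step()

gen = iter([1, 2, 3])
first = advance_iterator(gen)  # consumes the first item
```

Looking the method up before calling it keeps a `StopIteration` raised by the iterator from being confused with a missing attribute.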


@@ -1,11 +1,12 @@
"""
Copyright (c) 2003-2007 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
__license__ = "Simplified BSD"
from six import string_types, PY3
import datetime
import struct
@@ -25,6 +26,19 @@ try:
except (ImportError, OSError):
tzwin, tzwinlocal = None, None
def tzname_in_python2(myfunc):
"""Change unicode output into bytestrings in Python 2
tzname() API changed in Python 3. It used to return bytes, but was changed
to unicode strings
"""
def inner_func(*args, **kwargs):
if PY3:
return myfunc(*args, **kwargs)
else:
return myfunc(*args, **kwargs).encode()
return inner_func
ZERO = datetime.timedelta(0)
EPOCHORDINAL = datetime.datetime.utcfromtimestamp(0).toordinal()
@@ -36,6 +50,7 @@ class tzutc(datetime.tzinfo):
def dst(self, dt):
return ZERO
@tzname_in_python2
def tzname(self, dt):
return "UTC"
@@ -63,6 +78,7 @@ class tzoffset(datetime.tzinfo):
def dst(self, dt):
return ZERO
@tzname_in_python2
def tzname(self, dt):
return self._name
@@ -75,7 +91,7 @@ class tzoffset(datetime.tzinfo):
def __repr__(self):
return "%s(%s, %s)" % (self.__class__.__name__,
`self._name`,
repr(self._name),
self._offset.days*86400+self._offset.seconds)
__reduce__ = object.__reduce__
@@ -100,6 +116,7 @@ class tzlocal(datetime.tzinfo):
else:
return ZERO
@tzname_in_python2
def tzname(self, dt):
return time.tzname[self._isdst(dt)]
@@ -161,7 +178,7 @@ class _ttinfo(object):
for attr in self.__slots__:
value = getattr(self, attr)
if value is not None:
l.append("%s=%s" % (attr, `value`))
l.append("%s=%s" % (attr, repr(value)))
return "%s(%s)" % (self.__class__.__name__, ", ".join(l))
def __eq__(self, other):
@@ -191,16 +208,16 @@ class _ttinfo(object):
class tzfile(datetime.tzinfo):
# http://www.twinsun.com/tz/tz-link.htm
# ftp://elsie.nci.nih.gov/pub/tz*.tar.gz
# ftp://ftp.iana.org/tz/tz*.tar.gz
def __init__(self, fileobj):
if isinstance(fileobj, basestring):
if isinstance(fileobj, string_types):
self._filename = fileobj
fileobj = open(fileobj)
fileobj = open(fileobj, 'rb')
elif hasattr(fileobj, "name"):
self._filename = fileobj.name
else:
self._filename = `fileobj`
self._filename = repr(fileobj)
# From tzfile(5):
#
@@ -212,8 +229,8 @@ class tzfile(datetime.tzinfo):
# ``standard'' byte order (the high-order byte
# of the value is written first).
if fileobj.read(4) != "TZif":
raise ValueError, "magic not found"
if fileobj.read(4).decode() != "TZif":
raise ValueError("magic not found")
fileobj.read(16)
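Most of the mechanical churn in this file is the `raise` statement: Python 3 removed the `raise ValueError, "msg"` statement form, so every raise is rewritten to the call form, which is valid on both 2.x and 3.x. A tiny illustration (the `parse_magic` helper is invented for the example):

```python
def parse_magic(header):
    # `raise ValueError, "magic not found"` is a SyntaxError on Python 3;
    # the call form below parses and behaves identically on both versions.
    if header != "TZif":
        raise ValueError("magic not found")
    return True
```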
@@ -284,7 +301,7 @@ class tzfile(datetime.tzinfo):
for i in range(typecnt):
ttinfo.append(struct.unpack(">lbb", fileobj.read(6)))
abbr = fileobj.read(charcnt)
abbr = fileobj.read(charcnt).decode()
# Then there are tzh_leapcnt pairs of four-byte
# values, written in standard byte order; the
@@ -360,7 +377,7 @@ class tzfile(datetime.tzinfo):
if not self._trans_list:
self._ttinfo_std = self._ttinfo_first = self._ttinfo_list[0]
else:
for i in range(timecnt-1,-1,-1):
for i in range(timecnt-1, -1, -1):
tti = self._trans_idx[i]
if not self._ttinfo_std and not tti.isdst:
self._ttinfo_std = tti
@@ -448,6 +465,7 @@ class tzfile(datetime.tzinfo):
# dst offset, so I believe that this wouldn't be the right
# way to implement this.
@tzname_in_python2
def tzname(self, dt):
if not self._ttinfo_std:
return None
@@ -465,11 +483,11 @@ class tzfile(datetime.tzinfo):
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, `self._filename`)
return "%s(%s)" % (self.__class__.__name__, repr(self._filename))
def __reduce__(self):
if not os.path.isfile(self._filename):
raise ValueError, "Unpickable %s class" % self.__class__.__name__
raise ValueError("Unpickable %s class" % self.__class__.__name__)
return (self.__class__, (self._filename,))
class tzrange(datetime.tzinfo):
@@ -515,6 +533,7 @@ class tzrange(datetime.tzinfo):
else:
return ZERO
@tzname_in_python2
def tzname(self, dt):
if self._isdst(dt):
return self._dst_abbr
@@ -524,7 +543,7 @@ class tzrange(datetime.tzinfo):
def _isdst(self, dt):
if not self._start_delta:
return False
year = datetime.datetime(dt.year,1,1)
year = datetime.datetime(dt.year, 1, 1)
start = year+self._start_delta
end = year+self._end_delta
dt = dt.replace(tzinfo=None)
@@ -561,7 +580,7 @@ class tzstr(tzrange):
res = parser._parsetz(s)
if res is None:
raise ValueError, "unknown string format"
raise ValueError("unknown string format")
# Here we break the compatibility with the TZ variable handling.
# GMT-3 actually *means* the timezone -3.
@@ -624,9 +643,9 @@ class tzstr(tzrange):
return relativedelta.relativedelta(**kwargs)
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, `self._s`)
return "%s(%s)" % (self.__class__.__name__, repr(self._s))
class _tzicalvtzcomp:
class _tzicalvtzcomp(object):
def __init__(self, tzoffsetfrom, tzoffsetto, isdst,
tzname=None, rrule=None):
self.tzoffsetfrom = datetime.timedelta(seconds=tzoffsetfrom)
@@ -690,51 +709,52 @@ class _tzicalvtz(datetime.tzinfo):
else:
return ZERO
@tzname_in_python2
def tzname(self, dt):
return self._find_comp(dt).tzname
def __repr__(self):
return "<tzicalvtz %s>" % `self._tzid`
return "<tzicalvtz %s>" % repr(self._tzid)
__reduce__ = object.__reduce__
class tzical:
class tzical(object):
def __init__(self, fileobj):
global rrule
if not rrule:
from dateutil import rrule
if isinstance(fileobj, basestring):
if isinstance(fileobj, string_types):
self._s = fileobj
fileobj = open(fileobj)
fileobj = open(fileobj, 'r') # ical should be encoded in UTF-8 with CRLF
elif hasattr(fileobj, "name"):
self._s = fileobj.name
else:
self._s = `fileobj`
self._s = repr(fileobj)
self._vtz = {}
self._parse_rfc(fileobj.read())
def keys(self):
return self._vtz.keys()
return list(self._vtz.keys())
def get(self, tzid=None):
if tzid is None:
keys = self._vtz.keys()
keys = list(self._vtz.keys())
if len(keys) == 0:
raise ValueError, "no timezones defined"
raise ValueError("no timezones defined")
elif len(keys) > 1:
raise ValueError, "more than one timezone available"
raise ValueError("more than one timezone available")
tzid = keys[0]
return self._vtz.get(tzid)
def _parse_offset(self, s):
s = s.strip()
if not s:
raise ValueError, "empty offset"
raise ValueError("empty offset")
if s[0] in ('+', '-'):
signal = (-1,+1)[s[0]=='+']
signal = (-1, +1)[s[0]=='+']
s = s[1:]
else:
signal = +1
@@ -743,12 +763,12 @@ class tzical:
elif len(s) == 6:
return (int(s[:2])*3600+int(s[2:4])*60+int(s[4:]))*signal
else:
raise ValueError, "invalid offset: "+s
raise ValueError("invalid offset: "+s)
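The offset parser above turns an iCalendar UTC offset string into seconds. A runnable sketch, with the four-digit (`±HHMM`) branch reconstructed from context since the hunk elides it:

```python
def parse_offset(s):
    # Sketch of tzical._parse_offset: "+HHMM" or "+HHMMSS" -> signed seconds.
    s = s.strip()
    if not s:
        raise ValueError("empty offset")
    if s[0] in ('+', '-'):
        signal = (-1, +1)[s[0] == '+']
        s = s[1:]
    else:
        signal = +1
    if len(s) == 4:
        # hours and minutes only (assumed branch, elided in the diff)
        return (int(s[:2]) * 3600 + int(s[2:]) * 60) * signal
    elif len(s) == 6:
        return (int(s[:2]) * 3600 + int(s[2:4]) * 60 + int(s[4:])) * signal
    raise ValueError("invalid offset: " + s)
```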
def _parse_rfc(self, s):
lines = s.splitlines()
if not lines:
raise ValueError, "empty string"
raise ValueError("empty string")
# Unfold
i = 0
@@ -772,7 +792,7 @@ class tzical:
name, value = line.split(':', 1)
parms = name.split(';')
if not parms:
raise ValueError, "empty property name"
raise ValueError("empty property name")
name = parms[0].upper()
parms = parms[1:]
if invtz:
@@ -781,7 +801,7 @@ class tzical:
# Process component
pass
else:
raise ValueError, "unknown component: "+value
raise ValueError("unknown component: "+value)
comptype = value
founddtstart = False
tzoffsetfrom = None
@@ -791,27 +811,21 @@ class tzical:
elif name == "END":
if value == "VTIMEZONE":
if comptype:
raise ValueError, \
"component not closed: "+comptype
raise ValueError("component not closed: "+comptype)
if not tzid:
raise ValueError, \
"mandatory TZID not found"
raise ValueError("mandatory TZID not found")
if not comps:
raise ValueError, \
"at least one component is needed"
raise ValueError("at least one component is needed")
# Process vtimezone
self._vtz[tzid] = _tzicalvtz(tzid, comps)
invtz = False
elif value == comptype:
if not founddtstart:
raise ValueError, \
"mandatory DTSTART not found"
raise ValueError("mandatory DTSTART not found")
if tzoffsetfrom is None:
raise ValueError, \
"mandatory TZOFFSETFROM not found"
raise ValueError("mandatory TZOFFSETFROM not found")
if tzoffsetto is None:
raise ValueError, \
"mandatory TZOFFSETFROM not found"
raise ValueError("mandatory TZOFFSETTO not found")
# Process component
rr = None
if rrulelines:
@@ -825,8 +839,7 @@ class tzical:
comps.append(comp)
comptype = None
else:
raise ValueError, \
"invalid component end: "+value
raise ValueError("invalid component end: "+value)
elif comptype:
if name == "DTSTART":
rrulelines.append(line)
@@ -835,40 +848,36 @@ class tzical:
rrulelines.append(line)
elif name == "TZOFFSETFROM":
if parms:
raise ValueError, \
"unsupported %s parm: %s "%(name, parms[0])
raise ValueError("unsupported %s parm: %s "%(name, parms[0]))
tzoffsetfrom = self._parse_offset(value)
elif name == "TZOFFSETTO":
if parms:
raise ValueError, \
"unsupported TZOFFSETTO parm: "+parms[0]
raise ValueError("unsupported TZOFFSETTO parm: "+parms[0])
tzoffsetto = self._parse_offset(value)
elif name == "TZNAME":
if parms:
raise ValueError, \
"unsupported TZNAME parm: "+parms[0]
raise ValueError("unsupported TZNAME parm: "+parms[0])
tzname = value
elif name == "COMMENT":
pass
else:
raise ValueError, "unsupported property: "+name
raise ValueError("unsupported property: "+name)
else:
if name == "TZID":
if parms:
raise ValueError, \
"unsupported TZID parm: "+parms[0]
raise ValueError("unsupported TZID parm: "+parms[0])
tzid = value
elif name in ("TZURL", "LAST-MODIFIED", "COMMENT"):
pass
else:
raise ValueError, "unsupported property: "+name
raise ValueError("unsupported property: "+name)
elif name == "BEGIN" and value == "VTIMEZONE":
tzid = None
comps = []
invtz = True
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__, `self._s`)
return "%s(%s)" % (self.__class__.__name__, repr(self._s))
if sys.platform != "win32":
TZFILES = ["/etc/localtime", "localtime"]
@@ -914,7 +923,7 @@ def gettz(name=None):
for path in TZPATHS:
filepath = os.path.join(path, name)
if not os.path.isfile(filepath):
filepath = filepath.replace(' ','_')
filepath = filepath.replace(' ', '_')
if not os.path.isfile(filepath):
continue
try:


@@ -1,9 +1,8 @@
# This code was originally contributed by Jeffrey Harris.
import datetime
import struct
import _winreg
import winreg
__author__ = "Jeffrey Harris & Gustavo Niemeyer <gustavo@niemeyer.net>"
__all__ = ["tzwin", "tzwinlocal"]
@@ -15,9 +14,9 @@ TZLOCALKEYNAME = r"SYSTEM\CurrentControlSet\Control\TimeZoneInformation"
def _settzkeyname():
global TZKEYNAME
handle = _winreg.ConnectRegistry(None, _winreg.HKEY_LOCAL_MACHINE)
handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
try:
_winreg.OpenKey(handle, TZKEYNAMENT).Close()
winreg.OpenKey(handle, TZKEYNAMENT).Close()
TZKEYNAME = TZKEYNAMENT
except WindowsError:
TZKEYNAME = TZKEYNAME9X
@@ -49,10 +48,10 @@ class tzwinbase(datetime.tzinfo):
def list():
"""Return a list of all time zones known to the system."""
handle = _winreg.ConnectRegistry(None, _winreg.HKEY_LOCAL_MACHINE)
tzkey = _winreg.OpenKey(handle, TZKEYNAME)
result = [_winreg.EnumKey(tzkey, i)
for i in range(_winreg.QueryInfoKey(tzkey)[0])]
handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
tzkey = winreg.OpenKey(handle, TZKEYNAME)
result = [winreg.EnumKey(tzkey, i)
for i in range(winreg.QueryInfoKey(tzkey)[0])]
tzkey.Close()
handle.Close()
return result
@@ -79,8 +78,8 @@ class tzwin(tzwinbase):
def __init__(self, name):
self._name = name
handle = _winreg.ConnectRegistry(None, _winreg.HKEY_LOCAL_MACHINE)
tzkey = _winreg.OpenKey(handle, "%s\%s" % (TZKEYNAME, name))
handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
tzkey = winreg.OpenKey(handle, "%s\%s" % (TZKEYNAME, name))
keydict = valuestodict(tzkey)
tzkey.Close()
handle.Close()
@@ -118,9 +117,9 @@ class tzwinlocal(tzwinbase):
def __init__(self):
handle = _winreg.ConnectRegistry(None, _winreg.HKEY_LOCAL_MACHINE)
handle = winreg.ConnectRegistry(None, winreg.HKEY_LOCAL_MACHINE)
tzlocalkey = _winreg.OpenKey(handle, TZLOCALKEYNAME)
tzlocalkey = winreg.OpenKey(handle, TZLOCALKEYNAME)
keydict = valuestodict(tzlocalkey)
tzlocalkey.Close()
@@ -128,7 +127,7 @@ class tzwinlocal(tzwinbase):
self._dstname = keydict["DaylightName"].encode("iso-8859-1")
try:
tzkey = _winreg.OpenKey(handle, "%s\%s"%(TZKEYNAME, self._stdname))
tzkey = winreg.OpenKey(handle, "%s\%s"%(TZKEYNAME, self._stdname))
_keydict = valuestodict(tzkey)
self._display = _keydict["Display"]
tzkey.Close()
@@ -165,7 +164,7 @@ def picknthweekday(year, month, dayofweek, hour, minute, whichweek):
"""dayofweek == 0 means Sunday, whichweek 5 means last instance"""
first = datetime.datetime(year, month, 1, hour, minute)
weekdayone = first.replace(day=((dayofweek-first.isoweekday())%7+1))
for n in xrange(whichweek):
for n in range(whichweek):
dt = weekdayone+(whichweek-n)*ONEWEEK
if dt.month == month:
return dt
@@ -173,8 +172,8 @@ def picknthweekday(year, month, dayofweek, hour, minute, whichweek):
def valuestodict(key):
"""Convert a registry key's values to a dictionary."""
dict = {}
size = _winreg.QueryInfoKey(key)[1]
size = winreg.QueryInfoKey(key)[1]
for i in range(size):
data = _winreg.EnumValue(key, i)
data = winreg.EnumValue(key, i)
dict[data[0]] = data[1]
return dict


@@ -1,15 +1,16 @@
# -*- coding: utf-8 -*-
"""
Copyright (c) 2003-2005 Gustavo Niemeyer <gustavo@niemeyer.net>
This module offers extensions to the standard python 2.3+
This module offers extensions to the standard Python
datetime module.
"""
from dateutil.tz import tzfile
from tarfile import TarFile
import os
__author__ = "Gustavo Niemeyer <gustavo@niemeyer.net>"
__license__ = "PSF License"
__author__ = "Tomi Pieviläinen <tomi.pievilainen@iki.fi>"
__license__ = "Simplified BSD"
__all__ = ["setcachesize", "gettz", "rebuild"]
@@ -21,8 +22,7 @@ class tzfile(tzfile):
return (gettz, (self._filename,))
def getzoneinfofile():
filenames = os.listdir(os.path.join(os.path.dirname(__file__)))
filenames.sort()
filenames = sorted(os.listdir(os.path.join(os.path.dirname(__file__))))
filenames.reverse()
for entry in filenames:
if entry.startswith("zoneinfo") and ".tar." in entry:
@@ -66,7 +66,10 @@ def rebuild(filename, tag=None, format="gz"):
targetname = "zoneinfo%s.tar.%s" % (tag, format)
try:
tf = TarFile.open(filename)
for name in tf.getnames():
# The "backwards" zone file contains links to other files, so must be
# processed as last
for name in sorted(tf.getnames(),
key=lambda k: k != "backward" and k or "z"):
if not (name.endswith(".sh") or
name.endswith(".tab") or
name == "leapseconds"):
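The sort key above is the interesting bit: it maps `"backward"` to `"z"` so the link file sorts after the real zone files. A quick demonstration with made-up member names:

```python
# "backward" contains links to zones defined in the other files, so it
# must be extracted last; keying it as "z" pushes it to the end.
names = ["africa", "backward", "europe", "northamerica"]
ordered = sorted(names, key=lambda k: k != "backward" and k or "z")
```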

Binary file not shown.


@@ -1,8 +1,9 @@
import hashlib
import re
import hashlib
import time
import StringIO
__version__ = '0.6'
__version__ = '0.8'
#GNTP/<version> <messagetype> <encryptionAlgorithmID>[:<ivValue>][ <keyHashAlgorithmID>:<keyHash>.<salt>]
GNTP_INFO_LINE = re.compile(
@@ -19,7 +20,7 @@ GNTP_INFO_LINE_SHORT = re.compile(
GNTP_HEADER = re.compile('([\w-]+):(.+)')
GNTP_EOL = u'\r\n'
GNTP_EOL = '\r\n'
class BaseError(Exception):
@@ -43,6 +44,14 @@ class UnsupportedError(BaseError):
errordesc = 'Currently unsupported by gntp.py'
class _GNTPBuffer(StringIO.StringIO):
"""GNTP Buffer class"""
def writefmt(self, message = "", *args):
"""Shortcut function for writing GNTP Headers"""
self.write((message % args).encode('utf8', 'replace'))
self.write(GNTP_EOL)
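The new `_GNTPBuffer` replaces ad-hoc unicode string concatenation with a byte buffer, so binary resources can be appended directly. A Python 3 sketch (the original subclasses the Python 2 `StringIO.StringIO`; `io.BytesIO` plays the same role here):

```python
import io

GNTP_EOL = b'\r\n'

class GNTPBuffer(io.BytesIO):
    """Sketch of the _GNTPBuffer this diff adds, on Python 3 bytes."""
    def writefmt(self, message="", *args):
        # Format, encode, then terminate the line with the GNTP CRLF.
        self.write((message % args).encode('utf8', 'replace'))
        self.write(GNTP_EOL)

buf = GNTPBuffer()
buf.writefmt('%s: %s', 'Notification-Title', 'Done')
buf.writefmt()  # a bare CRLF terminates the header block
```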
class _GNTPBase(object):
"""Base initialization
@@ -206,8 +215,8 @@ class _GNTPBase(object):
if not match:
continue
key = match.group(1).strip()
val = match.group(2).strip()
key = unicode(match.group(1).strip(), 'utf8', 'replace')
val = unicode(match.group(2).strip(), 'utf8', 'replace')
dict[key] = val
return dict
@@ -217,6 +226,15 @@ class _GNTPBase(object):
else:
self.headers[key] = unicode('%s' % value, 'utf8', 'replace')
def add_resource(self, data):
"""Add binary resource
:param string data: Binary Data
"""
identifier = hashlib.md5(data).hexdigest()
self.resources[identifier] = data
return 'x-growl-resource://%s' % identifier
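`add_resource` keys each embedded binary blob by the MD5 of its contents and hands back the `x-growl-resource://` URI that headers use to reference it. The identifier half in isolation:

```python
import hashlib

def resource_identifier(data):
    # Same scheme as add_resource: content-addressed by MD5 hexdigest.
    ident = hashlib.md5(data).hexdigest()
    return 'x-growl-resource://%s' % ident
```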
def decode(self, data, password = None):
"""Decode GNTP Message
@@ -229,19 +247,30 @@ class _GNTPBase(object):
self.headers = self._parse_dict(parts[0])
def encode(self):
"""Encode a GNTP Message
"""Encode a generic GNTP Message
:return string: Encoded GNTP Message ready to be sent
:return string: GNTP Message ready to be sent
"""
self.validate()
message = self._format_info() + GNTP_EOL
buffer = _GNTPBuffer()
buffer.writefmt(self._format_info())
#Headers
for k, v in self.headers.iteritems():
message += u'%s: %s%s' % (k, v, GNTP_EOL)
buffer.writefmt('%s: %s', k, v)
buffer.writefmt()
message += GNTP_EOL
return message
#Resources
for resource, data in self.resources.iteritems():
buffer.writefmt('Identifier: %s', resource)
buffer.writefmt('Length: %d', len(data))
buffer.writefmt()
buffer.write(data)
buffer.writefmt()
buffer.writefmt()
return buffer.getvalue()
class GNTPRegister(_GNTPBase):
@@ -290,7 +319,7 @@ class GNTPRegister(_GNTPBase):
for i, part in enumerate(parts):
if i == 0:
continue # Skip Header
continue # Skip Header
if part.strip() == '':
continue
notice = self._parse_dict(part)
@@ -319,22 +348,33 @@ class GNTPRegister(_GNTPBase):
:return string: Encoded GNTP Registration message
"""
self.validate()
message = self._format_info() + GNTP_EOL
buffer = _GNTPBuffer()
buffer.writefmt(self._format_info())
#Headers
for k, v in self.headers.iteritems():
message += u'%s: %s%s' % (k, v, GNTP_EOL)
buffer.writefmt('%s: %s', k, v)
buffer.writefmt()
#Notifications
if len(self.notifications) > 0:
for notice in self.notifications:
message += GNTP_EOL
for k, v in notice.iteritems():
message += u'%s: %s%s' % (k, v, GNTP_EOL)
buffer.writefmt('%s: %s', k, v)
buffer.writefmt()
message += GNTP_EOL
return message
#Resources
for resource, data in self.resources.iteritems():
buffer.writefmt('Identifier: %s', resource)
buffer.writefmt('Length: %d', len(data))
buffer.writefmt()
buffer.write(data)
buffer.writefmt()
buffer.writefmt()
return buffer.getvalue()
class GNTPNotice(_GNTPBase):
@@ -379,7 +419,7 @@ class GNTPNotice(_GNTPBase):
for i, part in enumerate(parts):
if i == 0:
continue # Skip Header
continue # Skip Header
if part.strip() == '':
continue
notice = self._parse_dict(part)
@@ -388,21 +428,6 @@ class GNTPNotice(_GNTPBase):
#open('notice.png','wblol').write(notice['Data'])
self.resources[notice.get('Identifier')] = notice
def encode(self):
"""Encode a GNTP Notification Message
:return string: GNTP Notification Message ready to be sent
"""
self.validate()
message = self._format_info() + GNTP_EOL
#Headers
for k, v in self.headers.iteritems():
message += u'%s: %s%s' % (k, v, GNTP_EOL)
message += GNTP_EOL
return message
class GNTPSubscribe(_GNTPBase):
"""Represents a GNTP Subscribe Command
@@ -457,7 +482,8 @@ class GNTPError(_GNTPBase):
self.add_header('Error-Description', errordesc)
def error(self):
return self.headers['Error-Code'], self.headers['Error-Description']
return (self.headers.get('Error-Code', None),
self.headers.get('Error-Description', None))
def parse_gntp(data, password = None):


@@ -22,43 +22,6 @@ __all__ = [
logger = logging.getLogger(__name__)
def mini(description, applicationName = 'PythonMini', noteType = "Message",
title = "Mini Message", applicationIcon = None, hostname = 'localhost',
password = None, port = 23053, sticky = False, priority = None,
callback = None):
"""Single notification function
Simple notification function in one line. Has only one required parameter
and attempts to use reasonable defaults for everything else
:param string description: Notification message
.. warning::
For now, only URL callbacks are supported. In the future, the
callback argument will also support a function
"""
growl = GrowlNotifier(
applicationName = applicationName,
notifications = [noteType],
defaultNotifications = [noteType],
hostname = hostname,
password = password,
port = port,
)
result = growl.register()
if result is not True:
return result
return growl.notify(
noteType = noteType,
title = title,
description = description,
icon = applicationIcon,
sticky = sticky,
priority = priority,
callback = callback,
)
class GrowlNotifier(object):
"""Helper class to simplify sending Growl messages
@@ -93,10 +56,12 @@ class GrowlNotifier(object):
def _checkIcon(self, data):
'''
Check the icon to see if it's valid
@param data:
@todo Consider checking for a valid URL
If it's a simple URL icon, then we return True. If it's a data icon
then we return False
'''
return data
logger.info('Checking icon')
return data.startswith('http')
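The revised `_checkIcon` is what routes icons down the two paths in `register()` and `notify()`: URL-ish strings go straight into the header, anything else is embedded via `add_resource`. A standalone sketch of that predicate:

```python
def check_icon(data):
    # True for a URL icon (sent as a plain header value), False for
    # binary data (uploaded as an x-growl-resource instead). The
    # isinstance guard is added here so bytes input doesn't raise.
    return isinstance(data, str) and data.startswith('http')
```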
def register(self):
"""Send GNTP Registration
@@ -112,7 +77,11 @@ class GrowlNotifier(object):
enabled = notification in self.defaultNotifications
register.add_notification(notification, enabled)
if self.applicationIcon:
register.add_header('Application-Icon', self.applicationIcon)
if self._checkIcon(self.applicationIcon):
register.add_header('Application-Icon', self.applicationIcon)
else:
id = register.add_resource(self.applicationIcon)
register.add_header('Application-Icon', id)
if self.password:
register.set_password(self.password, self.passwordHash)
self.add_origin_info(register)
@@ -120,7 +89,7 @@ class GrowlNotifier(object):
return self._send('register', register)
def notify(self, noteType, title, description, icon = None, sticky = False,
priority = None, callback = None):
priority = None, callback = None, identifier = None):
"""Send a GNTP notifications
.. warning::
@@ -151,11 +120,18 @@ class GrowlNotifier(object):
if priority:
notice.add_header('Notification-Priority', priority)
if icon:
notice.add_header('Notification-Icon', self._checkIcon(icon))
if self._checkIcon(icon):
notice.add_header('Notification-Icon', icon)
else:
id = notice.add_resource(icon)
notice.add_header('Notification-Icon', id)
if description:
notice.add_header('Notification-Text', description)
if callback:
notice.add_header('Notification-Callback-Target', callback)
if identifier:
notice.add_header('Notification-Coalescing-ID', identifier)
self.add_origin_info(notice)
self.notify_hook(notice)
@@ -193,9 +169,10 @@ class GrowlNotifier(object):
def subscribe_hook(self, packet):
pass
def _send(self, type, packet):
def _send(self, messagetype, packet):
"""Send the GNTP Packet"""
packet.validate()
data = packet.encode()
logger.debug('To : %s:%s <%s>\n%s', self.hostname, self.port, packet.__class__, data)
@@ -203,7 +180,7 @@ class GrowlNotifier(object):
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.settimeout(self.socketTimeout)
s.connect((self.hostname, self.port))
s.send(data.encode('utf8', 'replace'))
s.send(data)
recv_data = s.recv(1024)
while not recv_data.endswith("\r\n\r\n"):
recv_data += s.recv(1024)
@@ -212,11 +189,51 @@ class GrowlNotifier(object):
logger.debug('From : %s:%s <%s>\n%s', self.hostname, self.port, response.__class__, response)
if response.info['messagetype'] == '-OK':
if type(response) == gntp.GNTPOK:
return True
logger.error('Invalid response: %s', response.error())
return response.error()
def mini(description, applicationName = 'PythonMini', noteType = "Message",
title = "Mini Message", applicationIcon = None, hostname = 'localhost',
password = None, port = 23053, sticky = False, priority = None,
callback = None, notificationIcon = None, identifier = None,
notifierFactory = GrowlNotifier):
"""Single notification function
Simple notification function in one line. Has only one required parameter
and attempts to use reasonable defaults for everything else
:param string description: Notification message
.. warning::
For now, only URL callbacks are supported. In the future, the
callback argument will also support a function
"""
growl = notifierFactory(
applicationName = applicationName,
notifications = [noteType],
defaultNotifications = [noteType],
applicationIcon = applicationIcon,
hostname = hostname,
password = password,
port = port,
)
result = growl.register()
if result is not True:
return result
return growl.notify(
noteType = noteType,
title = title,
description = description,
icon = notificationIcon,
sticky = sticky,
priority = priority,
callback = callback,
identifier = identifier,
)
if __name__ == '__main__':
# If we're running this module directly we're likely running it as a test
# so extra debugging is useful

libs/six.py (new file, 366 lines)

@@ -0,0 +1,366 @@
"""Utilities for writing code that runs on Python 2 and 3"""
import operator
import sys
import types
__author__ = "Benjamin Peterson <benjamin@python.org>"
__version__ = "1.2.0"
# True if we are running on Python 3.
PY3 = sys.version_info[0] == 3
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
MAXSIZE = sys.maxsize
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
if sys.platform == "java":
# Jython always uses 32 bits.
MAXSIZE = int((1 << 31) - 1)
else:
# It's possible to have sizeof(long) != sizeof(Py_ssize_t).
class X(object):
def __len__(self):
return 1 << 31
try:
len(X())
except OverflowError:
# 32-bit
MAXSIZE = int((1 << 31) - 1)
else:
# 64-bit
MAXSIZE = int((1 << 63) - 1)
del X
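The `string_types` tuple defined above is what lets the dateutil changes earlier in this diff replace `isinstance(fileobj, basestring)` with `isinstance(fileobj, string_types)`. A minimal sketch of that dispatch (the `describe_source` helper is invented for illustration):

```python
import sys

# Version-dispatched as six does it; basestring only exists on Python 2.
if sys.version_info[0] == 3:
    string_types = (str,)
else:
    string_types = (basestring,)  # noqa: F821 (Python 2 only)

def describe_source(fileobj):
    # Mirrors the tzfile/tzical constructors: a string is treated as a
    # filesystem path, anything else as an already-open file object.
    if isinstance(fileobj, string_types):
        return "path"
    return "file object"
```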
def _add_doc(func, doc):
"""Add documentation to a function."""
func.__doc__ = doc
def _import_module(name):
"""Import module, returning the module after the last dot."""
__import__(name)
return sys.modules[name]
class _LazyDescr(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, tp):
result = self._resolve()
setattr(obj, self.name, result)
# This is a bit ugly, but it avoids running this again.
delattr(tp, self.name)
return result
class MovedModule(_LazyDescr):
def __init__(self, name, old, new=None):
super(MovedModule, self).__init__(name)
if PY3:
if new is None:
new = name
self.mod = new
else:
self.mod = old
def _resolve(self):
return _import_module(self.mod)
class MovedAttribute(_LazyDescr):
def __init__(self, name, old_mod, new_mod, old_attr=None, new_attr=None):
super(MovedAttribute, self).__init__(name)
if PY3:
if new_mod is None:
new_mod = name
self.mod = new_mod
if new_attr is None:
if old_attr is None:
new_attr = name
else:
new_attr = old_attr
self.attr = new_attr
else:
self.mod = old_mod
if old_attr is None:
old_attr = name
self.attr = old_attr
def _resolve(self):
module = _import_module(self.mod)
return getattr(module, self.attr)
class _MovedItems(types.ModuleType):
"""Lazy loading of moved objects"""
_moved_attributes = [
MovedAttribute("cStringIO", "cStringIO", "io", "StringIO"),
MovedAttribute("filter", "itertools", "builtins", "ifilter", "filter"),
MovedAttribute("input", "__builtin__", "builtins", "raw_input", "input"),
MovedAttribute("map", "itertools", "builtins", "imap", "map"),
MovedAttribute("reload_module", "__builtin__", "imp", "reload"),
MovedAttribute("reduce", "__builtin__", "functools"),
MovedAttribute("StringIO", "StringIO", "io"),
MovedAttribute("xrange", "__builtin__", "builtins", "xrange", "range"),
MovedAttribute("zip", "itertools", "builtins", "izip", "zip"),
MovedModule("builtins", "__builtin__"),
MovedModule("configparser", "ConfigParser"),
MovedModule("copyreg", "copy_reg"),
MovedModule("http_cookiejar", "cookielib", "http.cookiejar"),
MovedModule("http_cookies", "Cookie", "http.cookies"),
MovedModule("html_entities", "htmlentitydefs", "html.entities"),
MovedModule("html_parser", "HTMLParser", "html.parser"),
MovedModule("http_client", "httplib", "http.client"),
MovedModule("BaseHTTPServer", "BaseHTTPServer", "http.server"),
MovedModule("CGIHTTPServer", "CGIHTTPServer", "http.server"),
MovedModule("SimpleHTTPServer", "SimpleHTTPServer", "http.server"),
MovedModule("cPickle", "cPickle", "pickle"),
MovedModule("queue", "Queue"),
MovedModule("reprlib", "repr"),
MovedModule("socketserver", "SocketServer"),
MovedModule("tkinter", "Tkinter"),
MovedModule("tkinter_dialog", "Dialog", "tkinter.dialog"),
MovedModule("tkinter_filedialog", "FileDialog", "tkinter.filedialog"),
MovedModule("tkinter_scrolledtext", "ScrolledText", "tkinter.scrolledtext"),
MovedModule("tkinter_simpledialog", "SimpleDialog", "tkinter.simpledialog"),
MovedModule("tkinter_tix", "Tix", "tkinter.tix"),
MovedModule("tkinter_constants", "Tkconstants", "tkinter.constants"),
MovedModule("tkinter_dnd", "Tkdnd", "tkinter.dnd"),
MovedModule("tkinter_colorchooser", "tkColorChooser",
"tkinter.colorchooser"),
MovedModule("tkinter_commondialog", "tkCommonDialog",
"tkinter.commondialog"),
MovedModule("tkinter_tkfiledialog", "tkFileDialog", "tkinter.filedialog"),
MovedModule("tkinter_font", "tkFont", "tkinter.font"),
MovedModule("tkinter_messagebox", "tkMessageBox", "tkinter.messagebox"),
MovedModule("tkinter_tksimpledialog", "tkSimpleDialog",
"tkinter.simpledialog"),
MovedModule("urllib_robotparser", "robotparser", "urllib.robotparser"),
MovedModule("winreg", "_winreg"),
]
for attr in _moved_attributes:
setattr(_MovedItems, attr.name, attr)
del attr
moves = sys.modules["six.moves"] = _MovedItems("moves")
def add_move(move):
"""Add an item to six.moves."""
setattr(_MovedItems, move.name, move)
def remove_move(name):
"""Remove item from six.moves."""
try:
delattr(_MovedItems, name)
except AttributeError:
try:
del moves.__dict__[name]
except KeyError:
raise AttributeError("no such move, %r" % (name,))
if PY3:
_meth_func = "__func__"
_meth_self = "__self__"
_func_code = "__code__"
_func_defaults = "__defaults__"
_iterkeys = "keys"
_itervalues = "values"
_iteritems = "items"
else:
_meth_func = "im_func"
_meth_self = "im_self"
_func_code = "func_code"
_func_defaults = "func_defaults"
_iterkeys = "iterkeys"
_itervalues = "itervalues"
_iteritems = "iteritems"
try:
advance_iterator = next
except NameError:
def advance_iterator(it):
return it.next()
next = advance_iterator
if PY3:
def get_unbound_function(unbound):
return unbound
Iterator = object
def callable(obj):
return any("__call__" in klass.__dict__ for klass in type(obj).__mro__)
else:
def get_unbound_function(unbound):
return unbound.im_func
class Iterator(object):
def next(self):
return type(self).__next__(self)
callable = callable
_add_doc(get_unbound_function,
"""Get the function out of a possibly unbound function""")
get_method_function = operator.attrgetter(_meth_func)
get_method_self = operator.attrgetter(_meth_self)
get_function_code = operator.attrgetter(_func_code)
get_function_defaults = operator.attrgetter(_func_defaults)
def iterkeys(d):
"""Return an iterator over the keys of a dictionary."""
return iter(getattr(d, _iterkeys)())
def itervalues(d):
"""Return an iterator over the values of a dictionary."""
return iter(getattr(d, _itervalues)())
def iteritems(d):
"""Return an iterator over the (key, value) pairs of a dictionary."""
return iter(getattr(d, _iteritems)())
if PY3:
def b(s):
return s.encode("latin-1")
def u(s):
return s
if sys.version_info[1] <= 1:
def int2byte(i):
return bytes((i,))
else:
# This is about 2x faster than the implementation above on 3.2+
int2byte = operator.methodcaller("to_bytes", 1, "big")
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
else:
def b(s):
return s
def u(s):
return unicode(s, "unicode_escape")
int2byte = chr
import StringIO
StringIO = BytesIO = StringIO.StringIO
_add_doc(b, """Byte literal""")
_add_doc(u, """Text literal""")
if PY3:
import builtins
exec_ = getattr(builtins, "exec")
def reraise(tp, value, tb=None):
if value.__traceback__ is not tb:
raise value.with_traceback(tb)
raise value
print_ = getattr(builtins, "print")
del builtins
else:
def exec_(code, globs=None, locs=None):
"""Execute code in a namespace."""
if globs is None:
frame = sys._getframe(1)
globs = frame.f_globals
if locs is None:
locs = frame.f_locals
del frame
elif locs is None:
locs = globs
exec("""exec code in globs, locs""")
exec_("""def reraise(tp, value, tb=None):
raise tp, value, tb
""")
def print_(*args, **kwargs):
"""The new-style print function."""
fp = kwargs.pop("file", sys.stdout)
if fp is None:
return
def write(data):
if not isinstance(data, basestring):
data = str(data)
fp.write(data)
want_unicode = False
sep = kwargs.pop("sep", None)
if sep is not None:
if isinstance(sep, unicode):
want_unicode = True
elif not isinstance(sep, str):
raise TypeError("sep must be None or a string")
end = kwargs.pop("end", None)
if end is not None:
if isinstance(end, unicode):
want_unicode = True
elif not isinstance(end, str):
raise TypeError("end must be None or a string")
if kwargs:
raise TypeError("invalid keyword arguments to print()")
if not want_unicode:
for arg in args:
if isinstance(arg, unicode):
want_unicode = True
break
if want_unicode:
newline = unicode("\n")
space = unicode(" ")
else:
newline = "\n"
space = " "
if sep is None:
sep = space
if end is None:
end = newline
for i, arg in enumerate(args):
if i:
write(sep)
write(arg)
write(end)
_add_doc(reraise, """Reraise an exception.""")
def with_metaclass(meta, base=object):
"""Create a base class with a metaclass."""
return meta("NewBase", (base,), {})
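`with_metaclass` sidesteps the incompatible metaclass syntaxes (`__metaclass__` on Python 2, `metaclass=` on Python 3) by building a throwaway base class whose type is the metaclass. Usage looks like this (`Registry` and `Plugin` are example names, not part of six):

```python
def with_metaclass(meta, base=object):
    """six's helper: create a base class with the given metaclass."""
    return meta("NewBase", (base,), {})

class Registry(type):
    # Toy metaclass that records every class it constructs.
    classes = []
    def __init__(cls, name, bases, ns):
        super(Registry, cls).__init__(name, bases, ns)
        Registry.classes.append(name)

class Plugin(with_metaclass(Registry)):
    pass
```

The intermediate `"NewBase"` class is also constructed by the metaclass, which is a known quirk of this helper.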


@@ -96,17 +96,18 @@ class CurlAsyncHTTPClient(AsyncHTTPClient):
pycurl.POLL_INOUT: ioloop.IOLoop.READ | ioloop.IOLoop.WRITE
}
if event == pycurl.POLL_REMOVE:
self.io_loop.remove_handler(fd)
del self._fds[fd]
if fd in self._fds:
self.io_loop.remove_handler(fd)
del self._fds[fd]
else:
ioloop_event = event_map[event]
if fd not in self._fds:
self._fds[fd] = ioloop_event
self.io_loop.add_handler(fd, self._handle_events,
ioloop_event)
else:
self._fds[fd] = ioloop_event
else:
self.io_loop.update_handler(fd, ioloop_event)
self._fds[fd] = ioloop_event
def _set_timeout(self, msecs):
"""Called by libcurl to schedule a timeout."""


@@ -194,7 +194,7 @@ class IOLoop(Configurable):
def initialize(self):
pass
def close(self, all_fds=False):
def close(self, all_fds = False):
"""Closes the IOLoop, freeing any resources used.
If ``all_fds`` is true, all file descriptors registered on the
@@ -320,7 +320,7 @@ class IOLoop(Configurable):
"""
raise NotImplementedError()
def add_callback(self, callback):
def add_callback(self, callback, *args, **kwargs):
"""Calls the given callback on the next I/O loop iteration.
It is safe to call this method from any thread at any time,
@@ -335,7 +335,7 @@ class IOLoop(Configurable):
"""
raise NotImplementedError()
def add_callback_from_signal(self, callback):
def add_callback_from_signal(self, callback, *args, **kwargs):
"""Calls the given callback on the next I/O loop iteration.
Safe for use from a Python signal handler; should not be used
@@ -359,8 +359,7 @@ class IOLoop(Configurable):
assert isinstance(future, IOLoop._FUTURE_TYPES)
callback = stack_context.wrap(callback)
future.add_done_callback(
lambda future: self.add_callback(
functools.partial(callback, future)))
lambda future: self.add_callback(callback, future))
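Accepting `*args`/`**kwargs` in `add_callback` lets the loop do the `functools.partial` wrapping once internally, so callers (like the future hookup just above) no longer build a partial themselves. A toy queue illustrating the shape of that API (`MiniLoop` is an invented stand-in, not tornado's IOLoop):

```python
import functools

class MiniLoop(object):
    def __init__(self):
        self._callbacks = []

    def add_callback(self, callback, *args, **kwargs):
        # One partial here replaces per-caller functools.partial wrapping.
        self._callbacks.append(functools.partial(callback, *args, **kwargs))

    def run(self):
        for cb in self._callbacks:
            cb()

results = []
loop = MiniLoop()
loop.add_callback(results.append, "done")  # no partial needed at the call site
loop.run()
```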
def _run_callback(self, callback):
"""Runs a callback with error handling.
@@ -382,7 +381,7 @@ class IOLoop(Configurable):
The exception itself is not passed explicitly, but is available
in sys.exc_info.
"""
app_log.error("Exception in callback %r", callback, exc_info=True)
app_log.error("Exception in callback %r", callback, exc_info = True)
@@ -393,7 +392,7 @@ class PollIOLoop(IOLoop):
(Linux), `tornado.platform.kqueue.KQueueIOLoop` (BSD and Mac), or
`tornado.platform.select.SelectIOLoop` (all platforms).
"""
def initialize(self, impl, time_func=None):
def initialize(self, impl, time_func = None):
super(PollIOLoop, self).initialize()
self._impl = impl
if hasattr(self._impl, 'fileno'):
@@ -417,7 +416,7 @@ class PollIOLoop(IOLoop):
lambda fd, events: self._waker.consume(),
self.READ)
def close(self, all_fds=False):
def close(self, all_fds = False):
with self._callback_lock:
self._closing = True
self.remove_handler(self._waker.fileno())
@@ -426,7 +425,7 @@ class PollIOLoop(IOLoop):
try:
os.close(fd)
except Exception:
gen_log.debug("error closing fd %s", fd, exc_info=True)
gen_log.debug("error closing fd %s", fd, exc_info = True)
self._waker.close()
self._impl.close()
@@ -442,8 +441,8 @@ class PollIOLoop(IOLoop):
self._events.pop(fd, None)
try:
self._impl.unregister(fd)
except (OSError, IOError):
gen_log.debug("Error deleting fd from IOLoop", exc_info=True)
except Exception:
gen_log.debug("Error deleting fd from IOLoop", exc_info = True)
def set_blocking_signal_threshold(self, seconds, action):
if not hasattr(signal, "setitimer"):
@@ -501,7 +500,7 @@ class PollIOLoop(IOLoop):
# IOLoop is just started once at the beginning.
signal.set_wakeup_fd(old_wakeup_fd)
old_wakeup_fd = None
except ValueError:  # non-main thread
pass
while True:
@@ -569,17 +568,18 @@ class PollIOLoop(IOLoop):
while self._events:
fd, events = self._events.popitem()
try:
self._handlers[fd](fd, events)
hdlr = self._handlers.get(fd)
if hdlr: hdlr(fd, events)
except (OSError, IOError), e:
if e.args[0] == errno.EPIPE:
# Happens when the client closes the connection
pass
else:
app_log.error("Exception in I/O handler for fd %s",
fd, exc_info=True)
fd, exc_info = True)
except Exception:
app_log.error("Exception in I/O handler for fd %s",
fd, exc_info=True)
fd, exc_info = True)
# reset the stopped flag so another start/stop pair can be issued
self._stopped = False
if self._blocking_signal_threshold is not None:
@@ -609,12 +609,13 @@ class PollIOLoop(IOLoop):
# collection pass whenever there are too many dead timeouts.
timeout.callback = None
def add_callback(self, callback):
def add_callback(self, callback, *args, **kwargs):
with self._callback_lock:
if self._closing:
raise RuntimeError("IOLoop is closing")
list_empty = not self._callbacks
self._callbacks.append(stack_context.wrap(callback))
self._callbacks.append(functools.partial(
stack_context.wrap(callback), *args, **kwargs))
if list_empty and thread.get_ident() != self._thread_ident:
# If we're in the IOLoop's thread, we know it's not currently
# polling. If we're not, and we added the first callback to an
@@ -624,12 +625,12 @@ class PollIOLoop(IOLoop):
# avoid it when we can.
self._waker.wake()
def add_callback_from_signal(self, callback):
def add_callback_from_signal(self, callback, *args, **kwargs):
with stack_context.NullContext():
if thread.get_ident() != self._thread_ident:
# if the signal is handled on another thread, we can add
# it normally (modulo the NullContext)
self.add_callback(callback)
self.add_callback(callback, *args, **kwargs)
else:
# If we're on the IOLoop's thread, we cannot use
# the regular add_callback because it may deadlock on
@@ -639,7 +640,8 @@ class PollIOLoop(IOLoop):
# _callback_lock block in IOLoop.start, we may modify
# either the old or new version of self._callbacks,
# but either way will work.
self._callbacks.append(stack_context.wrap(callback))
self._callbacks.append(functools.partial(
stack_context.wrap(callback), *args, **kwargs))
class _Timeout(object):
@@ -682,7 +684,7 @@ class PeriodicCallback(object):
`start` must be called after the PeriodicCallback is created.
"""
def __init__(self, callback, callback_time, io_loop=None):
def __init__(self, callback, callback_time, io_loop = None):
self.callback = callback
if callback_time <= 0:
raise ValueError("Periodic callback must have a positive callback_time")
@@ -710,7 +712,7 @@ class PeriodicCallback(object):
try:
self.callback()
except Exception:
app_log.error("Error in periodic callback", exc_info=True)
app_log.error("Error in periodic callback", exc_info = True)
self._schedule_next()
def _schedule_next(self):


@@ -209,11 +209,19 @@ class BaseIOStream(object):
"""Call the given callback when the stream is closed."""
self._close_callback = stack_context.wrap(callback)
def close(self):
"""Close this stream."""
def close(self, exc_info=False):
"""Close this stream.
If ``exc_info`` is true, set the ``error`` attribute to the current
exception from `sys.exc_info()` (or if ``exc_info`` is a tuple,
use that instead of `sys.exc_info`).
"""
if not self.closed():
if any(sys.exc_info()):
self.error = sys.exc_info()[1]
if exc_info:
if not isinstance(exc_info, tuple):
exc_info = sys.exc_info()
if any(exc_info):
self.error = exc_info[1]
if self._read_until_close:
callback = self._read_callback
self._read_callback = None
@@ -285,7 +293,7 @@ class BaseIOStream(object):
except Exception:
gen_log.error("Uncaught exception, closing connection.",
exc_info=True)
self.close()
self.close(exc_info=True)
raise
def _run_callback(self, callback, *args):
@@ -300,7 +308,7 @@ class BaseIOStream(object):
# (It would eventually get closed when the socket object is
# gc'd, but we don't want to rely on gc happening before we
# run out of file descriptors)
self.close()
self.close(exc_info=True)
# Re-raise the exception so that IOLoop.handle_callback_exception
# can see it and log the error
raise
@@ -348,7 +356,7 @@ class BaseIOStream(object):
self._pending_callbacks -= 1
except Exception:
gen_log.warning("error on read", exc_info=True)
self.close()
self.close(exc_info=True)
return
if self._read_from_buffer():
return
@@ -397,9 +405,9 @@ class BaseIOStream(object):
# Treat ECONNRESET as a connection close rather than
# an error to minimize log spam (the exception will
# be available on self.error for apps that care).
self.close()
self.close(exc_info=True)
return
self.close()
self.close(exc_info=True)
raise
if chunk is None:
return 0
@@ -503,7 +511,7 @@ class BaseIOStream(object):
else:
gen_log.warning("Write error on %d: %s",
self.fileno(), e)
self.close()
self.close(exc_info=True)
return
if not self._write_buffer and self._write_callback:
callback = self._write_callback
@@ -664,7 +672,7 @@ class IOStream(BaseIOStream):
if e.args[0] not in (errno.EINPROGRESS, errno.EWOULDBLOCK):
gen_log.warning("Connect error on fd %d: %s",
self.socket.fileno(), e)
self.close()
self.close(exc_info=True)
return
self._connect_callback = stack_context.wrap(callback)
self._add_io_state(self.io_loop.WRITE)
@@ -733,7 +741,7 @@ class SSLIOStream(IOStream):
return
elif err.args[0] in (ssl.SSL_ERROR_EOF,
ssl.SSL_ERROR_ZERO_RETURN):
return self.close()
return self.close(exc_info=True)
elif err.args[0] == ssl.SSL_ERROR_SSL:
try:
peer = self.socket.getpeername()
@@ -741,11 +749,11 @@ class SSLIOStream(IOStream):
peer = '(not connected)'
gen_log.warning("SSL Error on %d %s: %s",
self.socket.fileno(), peer, err)
return self.close()
return self.close(exc_info=True)
raise
except socket.error, err:
if err.args[0] in (errno.ECONNABORTED, errno.ECONNRESET):
return self.close()
return self.close(exc_info=True)
else:
self._ssl_accepting = False
if self._ssl_connect_callback is not None:
@@ -842,7 +850,7 @@ class PipeIOStream(BaseIOStream):
elif e.args[0] == errno.EBADF:
# If the writing half of a pipe is closed, select will
# report it as readable but reads will fail with EBADF.
self.close()
self.close(exc_info=True)
return None
else:
raise


@@ -431,6 +431,8 @@ class TwistedIOLoop(tornado.ioloop.IOLoop):
self.reactor.removeWriter(self.fds[fd])
def remove_handler(self, fd):
if fd not in self.fds:
return
self.fds[fd].lost = True
if self.fds[fd].reading:
self.reactor.removeReader(self.fds[fd])
@@ -444,6 +446,12 @@ class TwistedIOLoop(tornado.ioloop.IOLoop):
def stop(self):
self.reactor.crash()
def _run_callback(self, callback, *args, **kwargs):
try:
callback(*args, **kwargs)
except Exception:
self.handle_callback_exception(callback)
def add_timeout(self, deadline, callback):
if isinstance(deadline, (int, long, float)):
delay = max(deadline - self.time(), 0)
@@ -451,13 +459,14 @@ class TwistedIOLoop(tornado.ioloop.IOLoop):
delay = deadline.total_seconds()
else:
raise TypeError("Unsupported deadline %r" % deadline)
return self.reactor.callLater(delay, wrap(callback))
return self.reactor.callLater(delay, self._run_callback, wrap(callback))
def remove_timeout(self, timeout):
timeout.cancel()
def add_callback(self, callback):
self.reactor.callFromThread(wrap(callback))
def add_callback(self, callback, *args, **kwargs):
self.reactor.callFromThread(self._run_callback,
wrap(callback), *args, **kwargs)
def add_callback_from_signal(self, callback):
self.add_callback(callback)
def add_callback_from_signal(self, callback, *args, **kwargs):
self.add_callback(callback, *args, **kwargs)


@@ -268,7 +268,7 @@ class Subprocess(object):
assert ret_pid == pid
subproc = cls._waiting.pop(pid)
subproc.io_loop.add_callback_from_signal(
functools.partial(subproc._set_returncode, status))
subproc._set_returncode, status)
def _set_returncode(self, status):
if os.WIFSIGNALED(status):


@@ -12,7 +12,6 @@ from tornado.util import b, GzipDecompressor
import base64
import collections
import contextlib
import copy
import functools
import os.path
@@ -134,7 +133,7 @@ class _HTTPConnection(object):
self._decompressor = None
# Timeout handle returned by IOLoop.add_timeout
self._timeout = None
with stack_context.StackContext(self.cleanup):
with stack_context.ExceptionStackContext(self._handle_exception):
self.parsed = urlparse.urlsplit(_unicode(self.request.url))
if ssl is None and self.parsed.scheme == "https":
raise ValueError("HTTPS requires either python2.6+ or "
@@ -309,19 +308,24 @@ class _HTTPConnection(object):
if self.final_callback is not None:
final_callback = self.final_callback
self.final_callback = None
final_callback(response)
self.io_loop.add_callback(final_callback, response)
@contextlib.contextmanager
def cleanup(self):
try:
yield
except Exception, e:
gen_log.warning("uncaught exception", exc_info=True)
self._run_callback(HTTPResponse(self.request, 599, error=e,
def _handle_exception(self, typ, value, tb):
if self.final_callback:
gen_log.warning("uncaught exception", exc_info=(typ, value, tb))
self._run_callback(HTTPResponse(self.request, 599, error=value,
request_time=self.io_loop.time() - self.start_time,
))
if hasattr(self, "stream"):
self.stream.close()
return True
else:
# If our callback has already been called, we are probably
# catching an exception that is not caused by us but rather
# some child of our callback. Rather than drop it on the floor,
# pass it along.
return False
def _on_close(self):
if self.final_callback is not None:


@@ -36,9 +36,8 @@ except ImportError:
netutil = None
SimpleAsyncHTTPClient = None
from tornado.log import gen_log
from tornado.stack_context import StackContext
from tornado.stack_context import ExceptionStackContext
from tornado.util import raise_exc_info
import contextlib
import logging
import os
import re
@@ -167,13 +166,10 @@ class AsyncTestCase(unittest.TestCase):
'''
return IOLoop()
@contextlib.contextmanager
def _stack_context(self):
try:
yield
except Exception:
self.__failure = sys.exc_info()
self.stop()
def _handle_exception(self, typ, value, tb):
self.__failure = sys.exc_info()
self.stop()
return True
def __rethrow(self):
if self.__failure is not None:
@@ -182,7 +178,7 @@ class AsyncTestCase(unittest.TestCase):
raise_exc_info(failure)
def run(self, result=None):
with StackContext(self._stack_context):
with ExceptionStackContext(self._handle_exception):
super(AsyncTestCase, self).run(result)
# In case an exception escaped super.run or the StackContext caught
# an exception when there wasn't a wait() to re-raise it, do so here.
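A change that recurs throughout this diff is `add_callback` (and `add_callback_from_signal`) gaining `*args`/`**kwargs` and binding them with `functools.partial` at scheduling time, instead of requiring every caller to build the partial itself. The pattern can be sketched in isolation; the `MiniLoop` class below is a hypothetical stand-in, not Tornado's actual API:

```python
import functools


class MiniLoop:
    """Toy illustration of scheduling callbacks with pre-bound arguments."""

    def __init__(self):
        self._callbacks = []

    def add_callback(self, callback, *args, **kwargs):
        # As in the diff: bind the arguments when the callback is scheduled,
        # so the loop only ever has to invoke zero-argument callables.
        self._callbacks.append(functools.partial(callback, *args, **kwargs))

    def run_once(self):
        # Swap out the pending list first, so callbacks that schedule new
        # callbacks are deferred to the next iteration.
        callbacks, self._callbacks = self._callbacks, []
        return [cb() for cb in callbacks]


loop = MiniLoop()
loop.add_callback(lambda x, y: x + y, 2, 3)
```

Here `loop.run_once()` yields `[5]`; callers pass arguments directly rather than wrapping each callback in `functools.partial` at every call site.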

Some files were not shown because too many files have changed in this diff.