Compare commits

...

205 Commits

Author SHA1 Message Date
Ruud
51e747049d One up 2013-01-07 23:10:42 +01:00
Ruud
0582f7d694 Urlencode spotweb id. fix #1213 2013-01-07 23:10:06 +01:00
Ruud
fa7cac7538 Merge branch 'refs/heads/develop' into desktop 2013-01-07 22:41:55 +01:00
Ruud
ec857a9b3d FTDWorld: Check for login success 2013-01-07 22:31:42 +01:00
Ruud
4d32b0b16d Use FTDWorld temp api. closes #1243 2013-01-07 22:21:44 +01:00
Ruud
ca08287cff Ignore Growl timeout. fixes #1240 2013-01-07 20:54:21 +01:00
Ruud
36fee69843 XBMC notifier for Frodo & Eden 2013-01-06 23:06:38 +01:00
ikkemaniac
c5cae5ab9b add XBMC v11 Eden notifications support
This is my approach to working with Eden; maybe a little late since Frodo is almost released, but better late than never.

- First detect the JSON-RPC version XBMC is running (once per boot of CouchPotatoServer, on the first notification; except when sending a test message, in which case the JSON version is always checked).
- Set a variable indicating whether or not to use JSON (or normal http).
- If JSON should be used, proceed as before this commit.
- If normal-http should be used, use 'notifyXBMCnoJSON' func
- 'notifyXBMCnoJSON' just opens a specific XBMC api url, unfortunately importing urllib for this was necessary to escape the message strings.

TODO: support multiple XBMC hosts, right now the last host in the hosts array will set the 'useJSONnotifications' var.

Conflicts:

	couchpotato/core/notifications/xbmc/main.py

2013-01-06 22:52:59 +01:00
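The detection flow described above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the helper names (`probe_json_rpc`, `pick_notifier`) and the injected `post` transport are assumptions made for clarity.

```python
import json

def probe_json_rpc(post):
    """post(path, body) -> raw response text; should raise on transport failure."""
    payload = json.dumps({'jsonrpc': '2.0', 'method': 'JSONRPC.Version', 'id': 1})
    try:
        response = json.loads(post('/jsonrpc', payload))
        return 'result' in response   # host answered JSON-RPC
    except Exception:
        return False                  # fall back to the plain HTTP API

def pick_notifier(post):
    # the 'useJSONnotifications' decision: probe once, then route notifications
    return 'json' if probe_json_rpc(post) else 'http'
```

Injecting the transport keeps the routing decision testable without a running XBMC instance.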
ikkemaniac
9bd5688fb9 Remove services that are not required for couchpotato to run 2013-01-06 22:35:35 +01:00
ikkemaniac
1993c2b6cb Redo FreeBSD init script completely.
Use rc.subr functions and proper rc.conf variables.
2013-01-06 22:35:35 +01:00
ikkemaniac
acc8ed2092 Actually use config_file variable 2013-01-06 22:35:35 +01:00
ikkemaniac
7b4924dd7a Don't influence the PATH variable in FreeBSD rc script
Don't prepend the PATH variable, it's ugly, unwanted and unnecessary. Call binaries with their full path.
2013-01-06 22:35:35 +01:00
ikkemaniac
3a2861f72a fix FreeBSD init script
-add actual start command
-fix verify_couchpotato_pid function, 'ps' command failed if PID var was empty
-fix verify_couchpotato_pid usage, actually use the return of verify_couchpotato_pid in the 'stop' routine
2013-01-06 22:35:35 +01:00
Ruud
4779265b43 Change xbmc description 2013-01-06 11:52:05 +01:00
ikkemaniac
f8a46ebe6d clearly state XBMC version dependency for notifications 2013-01-06 11:33:53 +01:00
ikkemaniac
383ec7e6f5 check for XBMC JSON-RPC version and improve logging info 2013-01-06 11:33:53 +01:00
Ruud
dd9118292d Newznab log error 2013-01-06 11:20:13 +01:00
Ruud
4d0f8eb4ac Default add top25 to itunes automation 2013-01-04 20:26:36 +01:00
Ruud
637b21cc68 iTunes automation cleanup 2013-01-04 20:20:52 +01:00
Joseph Gardner
da429f0cb8 Adding itunes automation provider 2013-01-04 20:15:30 +01:00
Ruud
41c2845328 Toasty cleanup 2013-01-04 20:14:25 +01:00
Travis La Marr
c2453bb070 Added Windows Phone SuperToasty Notifier 2013-01-04 19:05:58 +01:00
Ruud
a3a2c8da8e Typo 2013-01-04 19:03:36 +01:00
Ruud
a1d4bab793 NZBVortex: Delete failed option 2013-01-02 14:11:40 +01:00
Ruud
d314a9b5b3 Also check status on manual 2013-01-02 14:11:11 +01:00
Ruud
9a60f6001a Check snatched on startup 2013-01-02 14:10:41 +01:00
Ruud
96a39dbf60 Link to downloaders 2013-01-02 13:52:59 +01:00
Ruud
015675750c Properly use imdb_results 2013-01-02 13:43:24 +01:00
Ruud
bf4dc62f54 NZBVortex support. closes #1204 2013-01-02 13:31:14 +01:00
Ruud
c2382ade05 Use provider for downloader 2013-01-02 13:29:44 +01:00
Ruud
2f65545086 Extend opener with multipart 2013-01-02 13:29:28 +01:00
Ruud
3aea2cd968 Simpler CP tag regex 2013-01-02 13:29:08 +01:00
Ruud
f30cb9185c Add nzbgeek to the defaults of newznab 2013-01-02 10:40:08 +01:00
Ruud
615468e8e6 Make nzbget password a password field. closes #1205 2012-12-31 21:31:11 +01:00
Ruud
0cbee01024 Don't use unicode when not needed in urlopen 2012-12-31 13:10:11 +01:00
Ruud
c29cb39797 Automation cleanup 2012-12-30 21:43:13 +01:00
Kris Kater
580ff38136 Added moviemeter.nl automation 2012-12-30 20:51:47 +01:00
Sander Boele
6b8bca5491 added path to the freebsd init script 2012-12-30 20:51:24 +01:00
Ruud
e92b5d95ca IOLoop cleanup 2012-12-30 18:40:38 +01:00
Ruud
611a32d110 Add randomstring to each internal api 2012-12-30 18:39:16 +01:00
Ruud
74e4b015a9 Module update: Tornado 2012-12-30 18:38:52 +01:00
Ruud
1e0267cdb5 Change OMGWTF to .org 2012-12-30 11:21:23 +01:00
Ruud
041a206fb4 Rename to OMDBapi 2012-12-29 23:45:21 +01:00
Ruud
12a4d6a995 Send proper user-agent with nzbx.co 2012-12-29 21:23:15 +01:00
Ruud
b14a6c1e63 nzbx description 2012-12-29 20:22:42 +01:00
spion06
7fa08ef9b6 Update init/freebsd to not use perl
When using CouchPotato on slimmed-down versions of FreeBSD (FreeNAS, for example), perl is sometimes not available. Since the previous parsing of the INI required a "key = value" format, it is pretty simple to use awk for this instead.
2012-12-29 19:24:12 +01:00
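The "key = value" lookup the awk one-liner performs is simple; sketched here in Python purely for illustration (the init script itself uses awk precisely so no extra interpreter is needed):

```python
def ini_get(text, key):
    # same 'key = value' assumption the awk-based parsing relies on
    for line in text.splitlines():
        stripped = line.strip()
        if '=' not in stripped or stripped.startswith(('#', ';', '[')):
            continue
        k, _, v = stripped.partition('=')
        if k.strip() == key:
            return v.strip()
    return None
```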
Ruud
9a314cfbc4 One up 2012-12-29 00:03:45 +01:00
Ruud
5941d0bf77 Add version to update url 2012-12-29 00:03:36 +01:00
Ruud
d326c1c25c Merge branch 'refs/heads/master' into desktop
Conflicts:
	version.py
2012-12-28 23:31:08 +01:00
Ruud
7e6234298d Merge branch 'refs/heads/develop' 2012-12-28 23:25:40 +01:00
Ruud
5cf4b8b4d3 Binsearch provider 2012-12-28 23:01:35 +01:00
Ruud
6e56072250 Don't migrate when in development 2012-12-28 17:44:02 +01:00
Ruud
917c5552a4 Simplified providers 2012-12-27 19:53:12 +01:00
Ruud
73c5b90232 TorrentDay support. closes #1161 2012-12-25 18:38:37 +01:00
Ruud
fd53ba0637 SceneAccess: Don't add quality to query 2012-12-25 17:43:47 +01:00
Ruud
0ef3906b3d Cleanup 2012-12-25 17:15:09 +01:00
Ruud
5ab0d7a97b Cleanup torrent providers 2012-12-25 14:56:27 +01:00
Ruud
dbbbbb2f84 Module update: dateutil 2012-12-25 11:38:49 +01:00
Ruud
1bfe948a45 Newznab didn't return results 2012-12-24 19:21:43 +01:00
Ruud
0d2dcff7f0 NZB Provider cleanup 2012-12-23 02:48:36 +01:00
Ruud
d4da206f93 Merge branch 'refs/heads/develop' 2012-12-22 16:33:47 +01:00
Ruud
439cda8b63 Newznab age wrong. fix #1171 2012-12-22 16:33:28 +01:00
Ruud
bbe8362b08 Show updating screen instantly. closes #1167 2012-12-21 23:54:41 +01:00
Ruud
985a168724 Merge branch 'refs/heads/develop' 2012-12-21 23:18:00 +01:00
Ruud
5e6aea97f7 Score providers 2012-12-21 23:17:50 +01:00
Ruud
6c7c4c7aba Use same api call for all qualities. closes #1164 2012-12-21 23:17:42 +01:00
Ruud
e2f59f5ff4 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2012-12-21 22:15:18 +01:00
Ruud
b225980ce7 Use pubDate and enclosure length for newznab 2012-12-21 22:14:37 +01:00
Ruud
b8e86b378f NZBx provider 2012-12-20 15:45:04 +01:00
Ruud
031a186d71 NZBx fixes 2012-12-20 15:19:40 +01:00
Ruud
3c04eed218 Added nzbx option 2012-12-20 15:18:46 +01:00
Ruud
17e01689d9 Remove torrage.ws. fix #1157 2012-12-19 16:13:40 +01:00
Ruud
173c6194ed Merge branch 'refs/heads/develop' 2012-12-19 11:12:26 +01:00
Ruud
95c2e992b0 Use trailer naming from settings. closes #936 2012-12-19 11:11:12 +01:00
Ruud
4bffb299af Catch urlerrors. closes #1154 2012-12-19 08:01:35 +01:00
Ruud
a2c4119508 Change PublicHD to .se TLD 2012-12-18 23:36:13 +01:00
Ruud
4e9472f8ee Encode path properly before using it in walk. close #978 2012-12-18 23:18:54 +01:00
Ruud
f7911fe9f3 Remove release on new scan 2012-12-18 14:14:06 +01:00
Ruud
8ffa6a8392 Quality by id 2012-12-18 13:54:13 +01:00
Ruud
382d49f895 Delete release if it has no files 2012-12-17 22:41:56 +01:00
Ruud
570b79a67e Use height with margin to check quality. fix #582 2012-12-17 22:27:07 +01:00
Ruud
e7aafc406f Check if identifier exists before adding release. fix #1048 2012-12-17 21:10:30 +01:00
Ruud
2dcc1e096e Make path safe first 2012-12-17 21:10:04 +01:00
Ruud
9f0746a668 Encoding issues. fix #974 2012-12-17 20:50:58 +01:00
Ruud
d9c437bd7f Fix some torrentleech stuff. closes #1149 2012-12-17 19:55:27 +01:00
Ruud
7079647f87 Also try and find movie name between [] 2012-12-17 18:49:37 +01:00
Ruud
65570ba479 Improve name searching. closes #1137 2012-12-17 18:22:12 +01:00
Ruud
a57ba9026d Year match only 1900-2099 2012-12-17 18:21:15 +01:00
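The 1900-2099 restriction amounts to a pattern along these lines; the regex actually used in the codebase may differ:

```python
import re

# Only accept years 1900-2099 as release-year candidates
year_re = re.compile(r'(?:19|20)\d{2}')

def find_year(name):
    match = year_re.search(name)
    return match.group() if match else None
```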
Ruud
63246256ee Don't remove stuff from python cache 2012-12-17 17:10:53 +01:00
Ruud
1ac0dc3bbf Don't show Environment vars when developing 2012-12-17 16:40:15 +01:00
Ruud
bcd23ad10c Merge branch 'refs/heads/develop' 2012-12-17 15:13:00 +01:00
Ruud
342d31b48a Remove ignored words which are part of title. close #1123 2012-12-17 14:08:44 +01:00
Ruud
ea7904ed9a Typo on seeders check. fix #1142 2012-12-17 13:56:11 +01:00
Ruud
ca37c2f018 Merge branch 'develop-renamer' of https://github.com/clinton-hall/CouchPotatoServer into develop 2012-12-17 13:18:23 +01:00
Ruud
5aa2146614 OMGWTFNZBs support. closes #1130
ZOMG BBQ SAUCAGES NOMNOMNOM
2012-12-17 13:07:01 +01:00
Ruud
0fd49a2c67 FTDWorld returned wrong backup category 2012-12-17 13:03:33 +01:00
Ruud
b680d84cba Don't use handler when in desktop build 2012-12-17 12:00:42 +01:00
Ruud
898e6f487d Merge branch 'refs/heads/develop' 2012-12-16 23:52:06 +01:00
Ruud
96472a9a8f One up 2012-12-16 23:51:58 +01:00
Ruud
27252561e2 Merge branch 'refs/heads/develop' into desktop 2012-12-16 23:51:24 +01:00
Ruud
24b341005e Extended Newznab description 2012-12-16 22:07:02 +01:00
Ruud
749cf550ec FTDWorld provider 2012-12-16 17:37:35 +01:00
Ruud
650177803b Move login to yarrprovider class 2012-12-16 17:34:50 +01:00
clinton-hall
bb7b4cbbed Added try: except for two common errors
Does not fix the errors, but prevents the renamer from getting stuck as "in progress",
allowing the next instance to run.
2012-12-13 19:45:13 -08:00
Ruud
6618c3927c Merge branch 'refs/heads/develop' 2012-12-11 23:15:06 +01:00
Ruud
003db92c9b Label depending on list. closes #1131 2012-12-11 22:22:31 +01:00
Arsecroft
b2b396bf17 Fixed PtP provider login as per link included
http://passthepopcorn.me/forums.php?action=viewthread&threadid=15602&post=68#post535328
2012-12-11 21:49:48 +01:00
Ruud
f1a1db8d5b Revert "Remove NZBsRus"
This reverts commit f515cd2477.
2012-12-09 21:50:07 +01:00
Ruud
f515cd2477 Remove NZBsRus 2012-12-09 16:21:36 +01:00
Ruud
65bb1bec27 Remove nzbmatrix 2012-12-09 15:49:47 +01:00
Ruud
cc84532824 Typo 2012-12-09 15:42:15 +01:00
clinton-hall
5530fbf792 Remove ignore words. URL too long
Most people are experiencing 401 errors with nzbclub
2012-12-05 22:07:33 +01:00
Ruud
5658a85f61 Use splitstring when possible. 2012-12-04 23:18:13 +01:00
Ruud
0c5206f01b Placeholder for settings 2012-12-04 23:04:48 +01:00
clinton-hall
4bffce637e Fixed double-up in wording 2012-12-04 22:49:20 +01:00
clinton-hall
9f2941a45c Changed description for required words 2012-12-04 22:49:20 +01:00
clinton-hall
f452106bfc Added comparison of sets of required words
Can now match sets of required words: words within a set are separated by '&', and sets are separated by ','.
Only one set needs to match, but within each set ALL words must match.
2012-12-04 22:49:20 +01:00
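The matching rule described there can be sketched as below; this is illustrative logic under the stated rule, not the project's actual helper:

```python
def required_match(title, required_words):
    """required_words: sets separated by ',', words within a set joined by '&'."""
    title = title.lower()
    word_sets = [s for s in required_words.lower().split(',') if s.strip()]
    if not word_sets:
        return True  # nothing is required
    # one matching set is enough, but every word in that set must appear
    return any(all(word.strip() in title for word in word_set.split('&'))
               for word_set in word_sets)
```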
Ruud
da3055be30 Return nothing on disabled calls. 2012-12-03 14:29:59 +01:00
Ruud
f9b65e7216 Add some better name guessing to searcher. fix #1091 2012-12-03 14:20:20 +01:00
Ruud
07e2c56095 Create empty download folder for Transmission. fix #1055 2012-12-03 13:58:46 +01:00
Ruud
9a6cfe3a21 Wait for wasn't used. fix #1104 2012-12-03 13:55:02 +01:00
Ruud
802338a934 Update Tornado lib 2012-12-03 13:31:34 +01:00
Ruud
f0a3358561 Use debug for Tornado errors 2012-12-03 13:11:08 +01:00
Ruud
1c4c69211b Change shutdown event name 2012-12-02 00:18:11 +01:00
Ruud
c9e732651f One up 2012-12-01 12:16:58 +01:00
Ruud
7849e7170d Uninstall only create files, no wildcard *.* 2012-12-01 12:16:51 +01:00
Ruud
087894eb4e Merge branch 'refs/heads/develop' into desktop
Conflicts:
	version.py
2012-12-01 11:50:08 +01:00
Ruud
4b58b40226 Merge branch 'refs/heads/develop' 2012-12-01 11:48:54 +01:00
Ruud
77d57f5a09 Removed stopped providers 2012-11-30 22:49:22 +01:00
Ruud
618845a021 Code formatting 2012-11-30 22:37:23 +01:00
iamnos
3aabcbf8f1 Update init/fedora
The config file should be read after the defaults are set, or the options set in the config file are never used.
2012-11-30 21:53:35 +01:00
Ruud
929c6fe3f9 Don't run schedules when it's 0 2012-11-30 21:42:18 +01:00
Ruud
c852949591 To lower case 2012-11-30 21:36:33 +01:00
Guillaume Bienkowski
e36c8ec3ab Fixed typo 2012-11-30 21:32:20 +01:00
Guillaume BIENKOWSKI
afea12c7c0 Log out when finished 2012-11-30 21:32:20 +01:00
Guillaume BIENKOWSKI
c29a8b47d6 Basic support of Synology DownloadStation API
VERY basic. We're not logging out at the moment, and we keep the
session open.
No verification of the protocol version either (I assume DSM 4.1).

Very basic, indeed
2012-11-30 21:32:20 +01:00
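For reference, DSM performs that login through its Web API's auth.cgi endpoint; the commit keeps the returned session id and never logs out. A rough sketch of building the login URL follows, with the parameter set based on Synology's Web API documentation as best understood (treat the exact names as assumptions):

```python
from urllib.parse import urlencode

def login_url(host, user, password):
    # DSM keeps a session per application; 'DownloadStation' scopes the sid
    qs = urlencode({
        'api': 'SYNO.API.Auth', 'version': 2, 'method': 'login',
        'account': user, 'passwd': password,
        'session': 'DownloadStation', 'format': 'sid',
    })
    return 'http://%s/webapi/auth.cgi?%s' % (host, qs)
```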
Alexej Haak
fdd0826b4f Added custom categories
Smaller API requests this way and better results (the categories include an audio quality filter).
2012-11-30 21:29:45 +01:00
Alexej Haak
81b7ebaf51 Kere.ws Implementation from Pheelee
Implementation by Pheelee
He closed the pull request some time ago and I don't know why. I've been using
this for some time now with no problems at all.
2012-11-30 21:29:45 +01:00
Ruud
eafc3db74d Add HDTS as alternative tag. #1097 2012-11-30 21:11:34 +01:00
Ruud
3464435a5c Imdb url only in XBMC meta data 2012-11-24 16:53:38 +01:00
Randall Ma
9f19902221 fixed typo of "successfully" 2012-11-24 16:35:22 +01:00
Ruud
2ed72c9098 Don't show button bar when there aren't any releases 2012-11-20 21:15:43 +01:00
Ruud
723f720280 Ignore header in TPB results 2012-11-20 21:15:22 +01:00
Ruud
daaa2154e5 Return when no table is returned. fix #1053 2012-11-20 21:07:21 +01:00
Ruud
95c5db2d17 Use css for search animations 2012-11-20 20:36:52 +01:00
Ruud
e53a9ed30a Try next on failed download from provider 2012-11-15 18:30:45 +01:00
Ruud
2b49a4b5d6 Merge branch 'develop' of https://github.com/clinton-hall/CouchPotatoServer into clinton-hall-develop 2012-11-14 13:54:27 +01:00
Ruud
4224a25e54 Sabnzbd, show what is in filedata. 2012-11-14 13:44:26 +01:00
Ruud
3635da1f59 Add new api for upcoming extension 2012-11-14 13:43:38 +01:00
clinton-hall
71cca6b87f Removed excess tags from title
Now only adds "Source", "Video Format", "Audio Format", and "Language"
2012-11-13 18:43:31 -08:00
Ruud
68c0496f8e Don't hide partial keyword log. closes #1043 2012-11-13 20:24:58 +01:00
Ruud
6dc3c8d69d Filmweb.pl userscript. closes #1029
Thanks @kempniu
2012-11-11 22:22:38 +01:00
Ruud
3ecc826629 Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2012-11-11 22:06:48 +01:00
Ruud
b03012e4aa Customizable check snatched. closes #920
Thanks to @clinton-hall
2012-11-11 22:01:19 +01:00
Ruud
5a1f05df8e Transmission, add folder name to download dir 2012-11-11 21:47:02 +01:00
Ruud
62a5909856 Transmission add trackers 2012-11-11 21:03:13 +01:00
Ruud
813c078db0 uTorrent cleanup
Add trackers when adding via magnet_link
2012-11-11 20:45:50 +01:00
Ruud
904d1ea4f7 Merge branch 'utorrent_downloader' of https://github.com/EchelonFour/CouchPotatoServer into EchelonFour-utorrent_downloader 2012-11-11 17:40:14 +01:00
Ruud
20b773bc3b Typos. closes #1022
Thanks to @demonbane
2012-11-11 17:36:42 +01:00
Ruud
be56b96bd0 Try and find groupname more accurately. fix #612 2012-11-11 16:47:00 +01:00
Ruud
655e847aeb Catch renamer.after event errors. fix #1032 2012-11-10 10:10:05 +01:00
Ruud
f3fd0afb42 Only do dvdr match when no quality is found. fix #1030 2012-11-09 17:30:32 +01:00
Ruud
3782ad7f98 Added dksubs to ignore list 2012-11-07 22:26:59 +01:00
Frank Fenton
6f5031fa7c Add torrent file support to utorrent downloader 2012-11-08 03:14:52 +11:00
Frank Fenton
93604a45e5 Add uTorrent downloader 2012-11-07 23:13:44 +11:00
Ruud
28f4169e44 Allow csv file from imdb.com. closes #1017 2012-11-05 17:56:44 +01:00
Ruud
2361057e4c Allow multiple getImdb 2012-11-05 17:55:45 +01:00
Ruud
5caa40bd81 Mapped audio codecs to renamer. closes #993
Thanks @clinton-hall
2012-11-04 00:23:36 +01:00
Ruud
a22bd4abd4 First start improvements 2012-11-03 22:36:25 +01:00
Ruud
a32ba7a763 Disable nzbclub by default 2012-11-03 22:01:13 +01:00
Ruud
5fe645cc11 Return boolean properly 2012-11-03 21:47:37 +01:00
Ruud
f333d85907 Added initscript for ffpstick closes #992
Thanks to @MariusRugan
2012-11-02 18:41:10 +01:00
Ruud
3ec2df5780 Fedora init fix #1009 2012-11-02 18:37:42 +01:00
Ruud
25f1b8c7a7 Fedora init fix #1009 2012-11-02 18:32:15 +01:00
Ruud
e71da1f14d Use proper description for binary build. fix #1005 2012-11-02 18:24:13 +01:00
Ruud
212d64143c Ignore extracting folder. fix #1004 2012-11-02 18:16:03 +01:00
Ruud
51f9b5c673 Clear queue tasks. fix #997 2012-11-02 17:28:39 +01:00
Ruud
2215c000b7 Return everything but None 2012-10-30 22:42:11 +01:00
Ruud
14797249ff NZBClub RSS doesn't support https. fix #991 2012-10-30 20:01:32 +01:00
Ruud
49e2607f5d One up 2012-10-29 21:04:38 +01:00
Ruud
938b14ba18 One up installer 2012-10-29 20:45:17 +01:00
Ruud
c893d5bbb8 import cleanup 2012-10-29 16:36:02 +01:00
Ruud
c4adab69cb Just use basename of __file__ when restarting. fix #982 2012-10-29 16:32:35 +01:00
Ruud
2c9af74f7f Add newznab host to found log 2012-10-29 16:23:55 +01:00
Ruud
7eb15c1a53 Add less important info log 2012-10-29 16:07:18 +01:00
Ruud
a02257a906 Enable both nzb and torrent by default
Use default user download dir and enable blackhole
2012-10-29 16:06:32 +01:00
Ruud
667075a006 Ctrl+c proper shutdown 2012-10-29 11:29:08 +01:00
Ruud
b0f6f9b2ea Update Flask 2012-10-29 10:54:38 +01:00
Ruud
c0900cfe94 Open up browser in readme 2012-10-27 19:14:00 +02:00
bfagundez
24a4810919 Modified the default url to highlight 2012-10-27 19:10:54 +02:00
bfagundez
70b15a5696 Added the url to open after installation, to avoid searching 10 minutes for the default port. 2012-10-27 19:10:54 +02:00
Ruud
32fe3796e4 Merge branch 'refs/heads/develop' 2012-10-26 22:22:47 +02:00
Ruud
359d1aaafa Merge branch 'refs/heads/develop' 2012-10-26 14:54:12 +02:00
Ruud
fb5d336351 Merge branch 'refs/heads/develop' 2012-10-26 14:36:04 +02:00
Ruud
eb30dff986 Merge branch 'refs/heads/develop' 2012-10-13 00:00:44 +02:00
Ruud
9312336962 Merge branch 'refs/heads/develop' 2012-09-24 09:36:59 +02:00
Ruud
ade4338ea6 Merge branch 'refs/heads/develop' 2012-09-16 21:32:16 +02:00
Ruud
55b20324c0 Merge branch 'refs/heads/develop' 2012-09-16 12:36:48 +02:00
Ruud
c0fb28301d Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2012-09-16 10:46:39 +02:00
Ruud
f9c2503f81 Merge branch 'refs/heads/develop' 2012-09-14 13:15:35 +02:00
Ruud
5b4cdf05b1 Merge branch 'refs/heads/develop' 2012-09-14 13:06:56 +02:00
Ruud
6f25a6bdfd Merge branch 'refs/heads/develop' 2012-09-03 10:32:09 +02:00
Ruud
23427e95f7 Merge branch 'refs/heads/develop' 2012-08-26 23:09:51 +02:00
Ruud
90a09e573b Merge branch 'refs/heads/develop'
Conflicts:
	couchpotato/core/_base/updater/main.py
2012-08-05 16:15:53 +02:00
Ruud
e1d7440b9d Wrong branch in master 2012-07-15 00:23:44 +02:00
281 changed files with 7038 additions and 3798 deletions

View File

@@ -100,7 +100,7 @@ class Loader(object):
logging.shutdown()
time.sleep(3)
-args = [sys.executable] + [os.path.join(base_path, __file__)] + sys.argv[1:]
+args = [sys.executable] + [os.path.join(base_path, os.path.basename(__file__))] + sys.argv[1:]
subprocess.Popen(args)
except:
self.log.critical(traceback.format_exc())

View File

@@ -1,5 +1,6 @@
from esky.util import appdir_from_executable #@UnresolvedImport
from threading import Thread
+from version import VERSION
from wx.lib.softwareupdate import SoftwareUpdate
import os
import sys
@@ -165,7 +166,7 @@ class CouchPotatoApp(wx.App, SoftwareUpdate):
def OnInit(self):
# Updater
-base_url = 'http://couchpota.to/updates/'
+base_url = 'http://couchpota.to/updates/%s/' % VERSION
self.InitUpdates(base_url, base_url + 'changelog.html',
icon = wx.Icon('icon.png'))

View File

@@ -17,6 +17,7 @@ Windows, see [the CP forum](http://couchpota.to/forum/showthread.php?tid=14) for
* Open up `Git Bash` (or CMD) and go to the folder you want to install CP. Something like Program Files.
* Run `git clone https://github.com/RuudBurger/CouchPotatoServer.git`.
* You can now start CP via `CouchPotatoServer\CouchPotato.py` to start
+* Your browser should open up, but if it doesn't go to: `http://localhost:5050/`
OSx:
@@ -26,6 +27,7 @@ OSx:
* Go to your App folder `cd /Applications`
* Run `git clone https://github.com/RuudBurger/CouchPotatoServer.git`
* Then do `python CouchPotatoServer/CouchPotato.py`
+* Your browser should open up, but if it doesn't go to: `http://localhost:5050/`
Linux (ubuntu / debian):
@@ -37,3 +39,4 @@ Linux (ubuntu / debian):
* Change the paths inside the init script. `sudo nano /etc/init.d/couchpotato`
* Make it executable. `sudo chmod +x /etc/init.d/couchpotato`
* Add it to defaults. `sudo update-rc.d couchpotato defaults`
+* Open your browser and go to: `http://localhost:5050/`

View File

@@ -1,6 +1,5 @@
from flask.blueprints import Blueprint
from flask.helpers import url_for
from tornado.ioloop import IOLoop
from tornado.web import RequestHandler, asynchronous
from werkzeug.utils import redirect
@@ -11,7 +10,11 @@ api_nonblock = {}
class NonBlockHandler(RequestHandler):
stoppers = []
def __init__(self, application, request, **kwargs):
cls = NonBlockHandler
cls.stoppers = []
super(NonBlockHandler, self).__init__(application, request, **kwargs)
@asynchronous
def get(self, route):

View File

@@ -9,6 +9,7 @@ from tornado.ioloop import IOLoop
from uuid import uuid4
import os
import platform
+import signal
import time
import traceback
import webbrowser
@@ -51,6 +52,9 @@ class Core(Plugin):
addEvent('setting.save.core.password', self.md5Password)
addEvent('setting.save.core.api_key', self.checkApikey)
+# Make sure we can close-down with ctrl+c properly
+if not Env.get('desktop'):
+self.signalHandler()
def md5Password(self, value):
return md5(value.encode(Env.get('encoding'))) if value else ''
@@ -66,7 +70,7 @@ class Core(Plugin):
def available(self):
return jsonified({
-'succes': True
+'success': True
})
def shutdown(self):
@@ -98,7 +102,7 @@ class Core(Plugin):
self.shutdown_started = True
-fireEvent('app.shutdown')
+fireEvent('app.do_shutdown')
log.debug('Every plugin got shutdown event')
loop = True
@@ -170,3 +174,10 @@ class Core(Plugin):
return jsonified({
'version': self.version()
})
+def signalHandler(self):
+def signal_handler(signal, frame):
+fireEvent('app.do_shutdown')
+signal.signal(signal.SIGINT, signal_handler)

View File

@@ -8,11 +8,9 @@ from couchpotato.environment import Env
from datetime import datetime
from dateutil.parser import parse
from git.repository import LocalRepository
import atexit
import json
import os
import shutil
import sys
import tarfile
import time
import traceback

View File

@@ -90,17 +90,18 @@ var UpdaterBase = new Class({
doUpdate: function(){
var self = this;
App.blockPage('Please wait while CouchPotato is being updated with more awesome stuff.', 'Updating');
Api.request('updater.update', {
'onComplete': function(json){
if(json.success){
if(json.success)
self.updating();
}
else
App.unBlockPage()
}
});
},
updating: function(){
App.blockPage('Please wait while CouchPotato is being updated with more awesome stuff.', 'Updating');
App.checkAvailable.delay(500, App, [1000, function(){
window.location.reload();
}]);

View File

@@ -1,23 +1,37 @@
from base64 import b32decode, b16encode
from couchpotato.core.event import addEvent
from couchpotato.core.logger import CPLog
-from couchpotato.core.plugins.base import Plugin
+from couchpotato.core.providers.base import Provider
import random
import re
log = CPLog(__name__)
-class Downloader(Plugin):
+class Downloader(Provider):
type = []
http_time_between_calls = 0
torrent_sources = [
'http://torrage.com/torrent/%s.torrent',
'http://torrage.ws/torrent/%s.torrent',
'http://torcache.net/torrent/%s.torrent',
]
torrent_trackers = [
'http://tracker.publicbt.com/announce',
'udp://tracker.istole.it:80/announce',
'udp://fr33domtracker.h33t.com:3310/announce',
'http://tracker.istole.it/announce',
'http://tracker.ccc.de/announce',
'udp://tracker.publicbt.com:80/announce',
'udp://tracker.ccc.de:80/announce',
'http://exodus.desync.com/announce',
'http://exodus.desync.com:6969/announce',
'http://tracker.publichd.eu/announce',
'http://tracker.openbittorrent.com/announce',
]
def __init__(self):
addEvent('download', self.download)
addEvent('download.status', self.getAllDownloadStatus)

View File

@@ -1,4 +1,5 @@
from .main import Blackhole
+from couchpotato.core.helpers.variable import getDownloadDir
def start():
return Blackhole()
@@ -16,7 +17,7 @@ config = [{
'options': [
{
'name': 'enabled',
-'default': 0,
+'default': True,
'type': 'enabler',
'radio_group': 'nzb,torrent',
},
@@ -24,6 +25,7 @@ config = [{
'name': 'directory',
'type': 'directory',
'description': 'Directory where the .nzb (or .torrent) file is saved to.',
+'default': getDownloadDir()
},
{
'name': 'use_for',

View File

@@ -10,7 +10,7 @@ config = [{
'tab': 'downloaders',
'name': 'nzbget',
'label': 'NZBGet',
-'description': 'Send NZBs to your NZBGet installation.',
+'description': 'Use <a href="http://nzbget.sourceforge.net/Main_Page" target="_blank">NZBGet</a> to download NZBs.',
'options': [
{
'name': 'enabled',
@@ -25,6 +25,7 @@ config = [{
},
{
'name': 'password',
+'type': 'password',
'description': 'Default NZBGet password is <i>tegbzn6789</i>',
},
{

View File

@@ -0,0 +1,46 @@
from .main import NZBVortex
def start():
return NZBVortex()
config = [{
'name': 'nzbvortex',
'groups': [
{
'tab': 'downloaders',
'name': 'nzbvortex',
'label': 'NZBVortex',
'description': 'Use <a href="http://www.nzbvortex.com/landing/" target="_blank">NZBVortex</a> to download NZBs.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'https://localhost:4321',
},
{
'name': 'api_key',
'label': 'Api Key',
},
{
'name': 'manual',
'default': False,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]

View File

@@ -0,0 +1,176 @@
from base64 import b64encode
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import tryUrlencode, ss
from couchpotato.core.helpers.variable import cleanHost
from couchpotato.core.logger import CPLog
from urllib2 import URLError
from uuid import uuid4
import hashlib
import httplib
import json
import socket
import ssl
import sys
import traceback
import urllib2
log = CPLog(__name__)
class NZBVortex(Downloader):
type = ['nzb']
api_level = None
session_id = None
def download(self, data = {}, movie = {}, manual = False, filedata = None):
if self.isDisabled(manual) or not self.isCorrectType(data.get('type')) or not self.getApiLevel():
return
# Send the nzb
try:
nzb_filename = self.createFileName(data, filedata, movie)
self.call('nzb/add', params = {'file': (ss(nzb_filename), filedata)}, multipart = True)
return True
except:
log.error('Something went wrong sending the NZB file: %s', traceback.format_exc())
return False
def getAllDownloadStatus(self):
if self.isDisabled(manual = True):
return False
raw_statuses = self.call('nzb')
statuses = []
for item in raw_statuses.get('nzbs', []):
# Check status
status = 'busy'
if item['state'] == 20:
status = 'completed'
elif item['state'] in [21, 22, 24]:
status = 'failed'
statuses.append({
'id': item['id'],
'name': item['uiTitle'],
'status': status,
'original_status': item['state'],
'timeleft':-1,
})
return statuses
def removeFailed(self, item):
if not self.conf('delete_failed', default = True):
return False
log.info('%s failed downloading, deleting...', item['name'])
try:
self.call('nzb/%s/cancel' % item['id'])
except:
log.error('Failed deleting: %s', traceback.format_exc(0))
return False
return True
def login(self):
nonce = self.call('auth/nonce', auth = False).get('authNonce')
cnonce = uuid4().hex
hashed = b64encode(hashlib.sha256('%s:%s:%s' % (nonce, cnonce, self.conf('api_key'))).digest())
params = {
'nonce': nonce,
'cnonce': cnonce,
'hash': hashed
}
login_data = self.call('auth/login', parameters = params, auth = False)
# Save for later
if login_data.get('loginResult') == 'successful':
self.session_id = login_data.get('sessionID')
return True
log.error('Login failed, please check you api-key')
return False
def call(self, call, parameters = {}, repeat = False, auth = True, *args, **kwargs):
# Login first
if not self.session_id and auth:
self.login()
# Always add session id to request
if self.session_id:
parameters['sessionid'] = self.session_id
params = tryUrlencode(parameters)
url = cleanHost(self.conf('host')) + 'api/' + call
url_opener = urllib2.build_opener(HTTPSHandler())
try:
data = self.urlopen('%s?%s' % (url, params), opener = url_opener, *args, **kwargs)
if data:
return json.loads(data)
except URLError, e:
if hasattr(e, 'code') and e.code == 403:
# Try login and do again
if not repeat:
self.login()
return self.call(call, parameters = parameters, repeat = True, *args, **kwargs)
log.error('Failed to parsing %s: %s', (self.getName(), traceback.format_exc()))
except:
log.error('Failed to parsing %s: %s', (self.getName(), traceback.format_exc()))
return {}
def getApiLevel(self):
if not self.api_level:
url = cleanHost(self.conf('host')) + 'api/app/apilevel'
url_opener = urllib2.build_opener(HTTPSHandler())
try:
data = self.urlopen(url, opener = url_opener, show_error = False)
self.api_level = float(json.loads(data).get('apilevel'))
except URLError, e:
if hasattr(e, 'code') and e.code == 403:
log.error('This version of NZBVortex isn\'t supported. Please update to 2.8.6 or higher')
else:
log.error('NZBVortex doesn\'t seem to be running or maybe the remote option isn\'t enabled yet: %s', traceback.format_exc(1))
return self.api_level
class HTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
def connect(self):
sock = socket.create_connection((self.host, self.port), self.timeout)
if sys.version_info < (2, 6, 7):
if hasattr(self, '_tunnel_host'):
self.sock = sock
self._tunnel()
else:
if self._tunnel_host:
self.sock = sock
self._tunnel()
self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ssl_version = ssl.PROTOCOL_TLSv1)
class HTTPSHandler(urllib2.HTTPSHandler):
def https_open(self, req):
return self.do_open(HTTPSConnection, req)

View File

@@ -11,7 +11,7 @@ config = [{
'tab': 'downloaders',
'name': 'pneumatic',
'label': 'Pneumatic',
-'description': 'Download the .strm file to a specific folder.',
+'description': 'Use <a href="http://forum.xbmc.org/showthread.php?tid=97657" target="_blank">Pneumatic</a> to download .strm files.',
'options': [
{
'name': 'enabled',

View File

@@ -10,7 +10,7 @@ config = [{
'tab': 'downloaders',
'name': 'sabnzbd',
'label': 'Sabnzbd',
-'description': 'Send NZBs to your Sabnzbd installation.',
+'description': 'Use <a href="http://sabnzbd.org/" target="_blank">SABnzbd</a> to download NZBs.',
'wizard': True,
'options': [
{

View File

@@ -1,5 +1,5 @@
from couchpotato.core.downloaders.base import Downloader
-from couchpotato.core.helpers.encoding import tryUrlencode
+from couchpotato.core.helpers.encoding import tryUrlencode, ss
from couchpotato.core.helpers.variable import cleanHost, mergeDicts
from couchpotato.core.logger import CPLog
from urllib2 import URLError
@@ -28,7 +28,7 @@ class Sabnzbd(Downloader):
if filedata:
if len(filedata) < 50:
-log.error('No proper nzb available!')
+log.error('No proper nzb available: %s', (filedata))
return False
# If it's a .rar, it adds the .rar extension, otherwise it stays .nzb
@@ -41,7 +41,7 @@ class Sabnzbd(Downloader):
try:
if params.get('mode') is 'addfile':
-sab = self.urlopen(url, timeout = 60, params = {'nzbfile': (nzb_filename, filedata)}, multipart = True, show_error = False)
+sab = self.urlopen(url, timeout = 60, params = {'nzbfile': (ss(nzb_filename), filedata)}, multipart = True, show_error = False)
else:
sab = self.urlopen(url, timeout = 60, show_error = False)
except URLError:
@@ -65,7 +65,7 @@ class Sabnzbd(Downloader):
return False
def getAllDownloadStatus(self):
if self.isDisabled(manual = False):
if self.isDisabled(manual = True):
return False
log.debug('Checking SABnzbd download status.')


@@ -0,0 +1,44 @@
from .main import Synology
def start():
return Synology()
config = [{
'name': 'synology',
'groups': [
{
'tab': 'downloaders',
'name': 'synology',
'label': 'Synology',
'description': 'Use <a href="http://www.synology.com/dsm/home_home_applications_download_station.php" target="_blank">Synology Download Station</a> to download.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:5000',
'description': 'Hostname with port. Usually <strong>localhost:5000</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -0,0 +1,108 @@
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import isInt
from couchpotato.core.logger import CPLog
import httplib
import json
import urllib
import urllib2
log = CPLog(__name__)
class Synology(Downloader):
type = ['torrent_magnet']
log = CPLog(__name__)
def download(self, data, movie, manual = False, filedata = None):
if self.isDisabled(manual) or not self.isCorrectType(data.get('type')):
return
log.info('Sending "%s" (%s) to Synology.', (data.get('name'), data.get('type')))
# Load host from config and split out port.
host = self.conf('host').split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
if data.get('type') == 'torrent':
log.error('Can\'t add binary torrent file')
return False
try:
# Send request to Synology
srpc = SynologyRPC(host[0], host[1], self.conf('username'), self.conf('password'))
remote_torrent = srpc.add_torrent_uri(data.get('url'))
log.info('Response: %s', remote_torrent)
return remote_torrent['success']
except Exception, err:
log.error('Exception while adding torrent: %s', err)
return False
class SynologyRPC(object):
'''SynologyRPC lite library'''
def __init__(self, host = 'localhost', port = 5000, username = None, password = None):
super(SynologyRPC, self).__init__()
self.download_url = 'http://%s:%s/webapi/DownloadStation/task.cgi' % (host, port)
self.auth_url = 'http://%s:%s/webapi/auth.cgi' % (host, port)
self.username = username
self.password = password
self.session_name = 'DownloadStation'
def _login(self):
if self.username and self.password:
args = {'api': 'SYNO.API.Auth', 'account': self.username, 'passwd': self.password, 'version': 2,
'method': 'login', 'session': self.session_name, 'format': 'sid'}
response = self._req(self.auth_url, args)
if response['success'] == True:
self.sid = response['data']['sid']
log.debug('Sid=%s', self.sid)
return response
elif self.username or self.password:
log.error('User or password missing, not using authentication.')
return False
def _logout(self):
args = {'api':'SYNO.API.Auth', 'version':1, 'method':'logout', 'session':self.session_name, '_sid':self.sid}
return self._req(self.auth_url, args)
def _req(self, url, args):
req_url = url + '?' + urllib.urlencode(args)
try:
req_open = urllib2.urlopen(req_url)
response = json.loads(req_open.read())
if response['success'] == True:
log.info('Synology action successful')
return response
except httplib.InvalidURL, err:
log.error('Invalid Synology host, check your config %s', err)
return False
except urllib2.HTTPError, err:
log.error('SynologyRPC HTTPError: %s', err)
return False
except urllib2.URLError, err:
log.error('Unable to connect to Synology %s', err)
return False
def add_torrent_uri(self, torrent):
log.info('Adding torrent URL %s', torrent)
response = {}
# login
login = self._login()
if login and login['success'] == True:
log.info('Login success, adding torrent')
args = {'api':'SYNO.DownloadStation.Task', 'version':1, 'method':'create', 'uri':torrent, '_sid':self.sid}
response = self._req(self.download_url, args)
self._logout()
else:
log.error('Couldn\'t login to Synology, %s', login)
return response
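The login call that `SynologyRPC._login()` issues above is a plain GET against `auth.cgi`. A self-contained sketch of the URL it builds (field names are taken from the patch; the host, port, and credentials are placeholders):

```python
from urllib.parse import urlencode

# Build the SYNO.API.Auth login URL that _login() sends via _req().
def build_login_url(host='localhost', port=5000, account='user', passwd='secret'):
    args = {
        'api': 'SYNO.API.Auth', 'version': 2, 'method': 'login',
        'account': account, 'passwd': passwd,
        'session': 'DownloadStation', 'format': 'sid',
    }
    return 'http://%s:%s/webapi/auth.cgi?%s' % (host, port, urlencode(args))
```

On success the response carries `data.sid`, which later `task.cgi` calls pass along as `_sid`.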


@@ -10,7 +10,7 @@ config = [{
'tab': 'downloaders',
'name': 'transmission',
'label': 'Transmission',
'description': 'Send torrents to Transmission.',
'description': 'Use <a href="http://www.transmissionbt.com/" target="_blank">Transmission</a> to download torrents.',
'wizard': True,
'options': [
{


@@ -30,9 +30,15 @@ class Transmission(Downloader):
return False
# Set parameters for Transmission
folder_name = self.createFileName(data, filedata, movie)[:-len(data.get('type')) - 1]
folder_path = os.path.join(self.conf('directory', default = ''), folder_name).rstrip(os.path.sep)
# Create the empty folder to download too
self.makeDir(folder_path)
params = {
'paused': self.conf('paused', default = 0),
'download-dir': self.conf('directory', default = '').rstrip(os.path.sep)
'download-dir': folder_path
}
torrent_params = {
@@ -49,6 +55,7 @@ class Transmission(Downloader):
trpc = TransmissionRPC(host[0], port = host[1], username = self.conf('username'), password = self.conf('password'))
if data.get('type') == 'torrent_magnet':
remote_torrent = trpc.add_torrent_uri(data.get('url'), arguments = params)
torrent_params['trackerAdd'] = self.torrent_trackers
else:
remote_torrent = trpc.add_torrent_file(b64encode(filedata), arguments = params)


@@ -0,0 +1,54 @@
from .main import uTorrent
def start():
return uTorrent()
config = [{
'name': 'utorrent',
'groups': [
{
'tab': 'downloaders',
'name': 'utorrent',
'label': 'uTorrent',
'description': 'Use <a href="http://www.utorrent.com/" target="_blank">uTorrent</a> to download torrents.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:8000',
'description': 'Hostname with port. Usually <strong>localhost:8000</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'label',
'description': 'Label to add torrent as.',
},
{
'name': 'paused',
'type': 'bool',
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -0,0 +1,138 @@
from bencode import bencode, bdecode
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import isInt
from couchpotato.core.logger import CPLog
from hashlib import sha1
from multipartpost import MultipartPostHandler
import cookielib
import httplib
import re
import time
import urllib
import urllib2
log = CPLog(__name__)
class uTorrent(Downloader):
type = ['torrent', 'torrent_magnet']
utorrent_api = None
def download(self, data, movie, manual = False, filedata = None):
if self.isDisabled(manual) or not self.isCorrectType(data.get('type')):
return
log.debug('Sending "%s" (%s) to uTorrent.', (data.get('name'), data.get('type')))
# Load host from config and split out port.
host = self.conf('host').split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
torrent_params = {}
if self.conf('label'):
torrent_params['label'] = self.conf('label')
if not filedata and data.get('type') == 'torrent':
log.error('Failed sending torrent, no data')
return False
if data.get('type') == 'torrent_magnet':
torrent_hash = re.findall('urn:btih:([\w]{32,40})', data.get('url'))[0].upper()
torrent_params['trackers'] = '%0D%0A%0D%0A'.join(self.torrent_trackers)
else:
info = bdecode(filedata)["info"]
torrent_hash = sha1(bencode(info)).hexdigest().upper()
torrent_filename = self.createFileName(data, filedata, movie)
# Send request to uTorrent
try:
if not self.utorrent_api:
self.utorrent_api = uTorrentAPI(host[0], port = host[1], username = self.conf('username'), password = self.conf('password'))
if data.get('type') == 'torrent_magnet':
self.utorrent_api.add_torrent_uri(data.get('url'))
else:
self.utorrent_api.add_torrent_file(torrent_filename, filedata)
# Change settings of added torrents
self.utorrent_api.set_torrent(torrent_hash, torrent_params)
if self.conf('paused', default = 0):
self.utorrent_api.pause_torrent(torrent_hash)
return True
except Exception, err:
log.error('Failed to send torrent to uTorrent: %s', err)
return False
class uTorrentAPI(object):
def __init__(self, host = 'localhost', port = 8000, username = None, password = None):
super(uTorrentAPI, self).__init__()
self.url = 'http://' + str(host) + ':' + str(port) + '/gui/'
self.token = ''
self.last_time = time.time()
cookies = cookielib.CookieJar()
self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
self.opener.addheaders = [('User-agent', 'couchpotato-utorrent-client/1.0')]
if username and password:
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(realm = None, uri = self.url, user = username, passwd = password)
self.opener.add_handler(urllib2.HTTPBasicAuthHandler(password_manager))
self.opener.add_handler(urllib2.HTTPDigestAuthHandler(password_manager))
elif username or password:
log.debug('User or password missing, not using authentication.')
self.token = self.get_token()
def _request(self, action, data = None):
if time.time() > self.last_time + 1800:
self.last_time = time.time()
self.token = self.get_token()
request = urllib2.Request(self.url + "?token=" + self.token + "&" + action, data)
try:
open_request = self.opener.open(request)
response = open_request.read()
log.debug('response: %s', response)
if response:
log.debug('uTorrent action successful')
return response
else:
log.debug('Unknown failure sending command to uTorrent. Return text is: %s', response)
except httplib.InvalidURL, err:
log.error('Invalid uTorrent host, check your config %s', err)
except urllib2.HTTPError, err:
if err.code == 401:
log.error('Invalid uTorrent Username or Password, check your config')
else:
log.error('uTorrent HTTPError: %s', err)
except urllib2.URLError, err:
log.error('Unable to connect to uTorrent %s', err)
return False
def get_token(self):
request = self.opener.open(self.url + "token.html")
token = re.findall("<div.*?>(.*?)</", request.read())[0]
return token
def add_torrent_uri(self, torrent):
action = "action=add-url&s=%s" % urllib.quote(torrent)
return self._request(action)
def add_torrent_file(self, filename, filedata):
action = "action=add-file"
return self._request(action, {"torrent_file": (filename, filedata)})
def set_torrent(self, hash, params):
action = "action=setprops&hash=%s" % hash
for k, v in params.iteritems():
action += "&s=%s&v=%s" % (k, v)
return self._request(action)
def pause_torrent(self, hash):
action = "action=pause&hash=%s" % hash
return self._request(action)
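uTorrent's WebUI serves its CSRF token inside a `<div>` on `token.html`, which `get_token()` above scrapes with a regex. A runnable illustration of that extraction against a sample page (the HTML here is a made-up stand-in for what the WebUI returns):

```python
import re

# Sample of the token.html payload the WebUI serves; get_token() pulls
# the token text out of the first <div>.
sample = "<html><div id='token' style='display:none;'>Abc123Token==</div></html>"
token = re.findall("<div.*?>(.*?)</", sample)[0]
```

Every later request then carries the token as a `?token=` query parameter, as `_request()` does.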


@@ -12,7 +12,7 @@ def runHandler(name, handler, *args, **kwargs):
return handler(*args, **kwargs)
except:
from couchpotato.environment import Env
log.error('Error in event "%s", that wasn\'t caught: %s%s', (name, traceback.format_exc(), Env.all()))
log.error('Error in event "%s", that wasn\'t caught: %s%s', (name, traceback.format_exc(), Env.all() if not Env.get('dev') else ''))
def addEvent(name, handler, priority = 100):
@@ -63,11 +63,21 @@ def fireEvent(name, *args, **kwargs):
except: pass
e = events[name]
if not options['in_order']: e.lock.acquire()
# Lock this event
e.lock.acquire()
e.asynchronous = False
e.in_order = options['in_order']
# Make sure only 1 event is fired at a time when order is wanted
kwargs['event_order_lock'] = threading.RLock() if options['in_order'] or options['single'] else None
kwargs['event_return_on_result'] = options['single']
# Fire
result = e(*args, **kwargs)
if not options['in_order']: e.lock.release()
# Release lock for this event
e.lock.release()
if options['single'] and not options['merge']:
results = None
@@ -95,14 +105,14 @@ def fireEvent(name, *args, **kwargs):
# Merge
if options['merge'] and len(results) > 0:
# Dict
if type(results[0]) == dict:
if isinstance(results[0], dict):
merged = {}
for result in results:
merged = mergeDicts(merged, result)
results = merged
# Lists
elif type(results[0]) == list:
elif isinstance(results[0], list):
merged = []
for result in results:
merged += result
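The list branch of the merge step above simply concatenates the results returned by each event handler into one flat list; a minimal illustration:

```python
# Two handlers each returned a list; the merge loop flattens them.
results = [[1, 2], [3]]
merged = []
for result in results:
    merged += result
```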


@@ -68,7 +68,7 @@ def tryUrlencode(s):
return new[1:]
else:
for letter in toUnicode(s):
for letter in ss(s):
try:
new += quote_plus(letter)
except:


@@ -1,3 +1,4 @@
from couchpotato.core.helpers.encoding import simplifyString, toSafeString
from couchpotato.core.logger import CPLog
import hashlib
import os.path
@@ -9,15 +10,34 @@ import sys
log = CPLog(__name__)
def getUserDir():
try:
import pwd
os.environ['HOME'] = pwd.getpwuid(os.geteuid()).pw_dir
except:
pass
return os.path.expanduser('~')
def getDownloadDir():
user_dir = getUserDir()
# OSX
if 'darwin' in platform.platform().lower():
return os.path.join(user_dir, 'Downloads')
if os.name == 'nt':
return os.path.join(user_dir, 'Downloads')
return user_dir
def getDataDir():
# Windows
if os.name == 'nt':
return os.path.join(os.environ['APPDATA'], 'CouchPotato')
import pwd
os.environ['HOME'] = pwd.getpwuid(os.geteuid()).pw_dir
user_dir = os.path.expanduser('~')
user_dir = getUserDir()
# OSX
if 'darwin' in platform.platform().lower():
@@ -84,7 +104,7 @@ def cleanHost(host):
return host
def getImdb(txt, check_inside = True):
def getImdb(txt, check_inside = True, multiple = False):
if check_inside and os.path.isfile(txt):
output = open(txt, 'r')
@@ -92,8 +112,10 @@ def getImdb(txt, check_inside = True):
output.close()
try:
id = re.findall('(tt\d{7})', txt)[0]
return id
ids = re.findall('(tt\d{7})', txt)
if multiple:
return ids if len(ids) > 0 else []
return ids[0]
except IndexError:
pass
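The core of the `getImdb()` change above: with `multiple=True` every IMDb id found in the text is returned instead of only the first (the sample text is made up):

```python
import re

# Scan text for IMDb ids (tt + 7 digits), as getImdb() does.
txt = 'Both tt0133093 and tt0234215 are in this folder'
ids = re.findall(r'(tt\d{7})', txt)
first = ids[0]
```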
@@ -132,6 +154,16 @@ def getTitle(library_dict):
log.error('Could not get title for library item: %s', library_dict)
return None
def possibleTitles(raw_title):
titles = []
titles.append(toSafeString(raw_title).lower())
titles.append(raw_title.lower())
titles.append(simplifyString(raw_title))
return list(set(titles))
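`possibleTitles()` above builds a deduplicated list of title variants. The real `toSafeString`/`simplifyString` helpers live in `couchpotato.core.helpers.encoding`; the stand-ins below are simplified hypothetical versions, just enough to make the dedup behaviour concrete:

```python
import re

# Hypothetical stand-ins for the couchpotato encoding helpers.
def to_safe_string(s):
    return re.sub(r'[^\w\s-]', '', s).strip()

def simplify_string(s):
    return re.sub(r'\s+', ' ', to_safe_string(s)).lower()

def possible_titles(raw_title):
    titles = [
        to_safe_string(raw_title).lower(),
        raw_title.lower(),
        simplify_string(raw_title),
    ]
    return list(set(titles))
```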
def randomString(size = 8, chars = string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for x in range(size))


@@ -17,6 +17,9 @@ class CPLog(object):
def info(self, msg, replace_tuple = ()):
self.logger.info(self.addContext(msg, replace_tuple))
def info2(self, msg, replace_tuple = ()):
self.logger.log(19, self.addContext(msg, replace_tuple))
def debug(self, msg, replace_tuple = ()):
self.logger.debug(self.addContext(msg, replace_tuple))
@@ -53,7 +56,8 @@ class CPLog(object):
if not Env.get('dev'):
for replace in self.replace_private:
msg = re.sub('(%s=)[^\&]+' % replace, '%s=xxx' % replace, msg)
msg = re.sub('(\?%s=)[^\&]+' % replace, '?%s=xxx' % replace, msg)
msg = re.sub('(&%s=)[^\&]+' % replace, '&%s=xxx' % replace, msg)
# Replace api key
try:


@@ -37,8 +37,11 @@ class Growl(Notification):
)
self.growl.register()
self.registered = True
except:
log.error('Failed register of growl: %s', traceback.format_exc())
except Exception, e:
if 'timed out' in str(e):
self.registered = True
else:
log.error('Failed register of growl: %s', traceback.format_exc())
def notify(self, message = '', data = {}, listener = None):
if self.isDisabled(): return


@@ -0,0 +1,32 @@
from .main import Toasty
def start():
return Toasty()
config = [{
'name': 'toasty',
'groups': [
{
'tab': 'notifications',
'name': 'toasty',
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
},
{
'name': 'api_key',
'label': 'Device ID',
},
{
'name': 'on_snatch',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Also send message when movie is snatched.',
},
],
}
],
}]


@@ -0,0 +1,30 @@
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.logger import CPLog
from couchpotato.core.notifications.base import Notification
import traceback
log = CPLog(__name__)
class Toasty(Notification):
urls = {
'api': 'http://api.supertoasty.com/notify/%s?%s'
}
def notify(self, message = '', data = {}, listener = None):
if self.isDisabled(): return
data = {
'title': self.default_title,
'text': toUnicode(message),
'sender': toUnicode("CouchPotato"),
'image': 'https://raw.github.com/RuudBurger/CouchPotatoServer/master/couchpotato/static/images/homescreen.png',
}
try:
self.urlopen(self.urls['api'] % (self.conf('api_key'), tryUrlencode(data)), show_error = False)
return True
except:
log.error('Toasty failed: %s', traceback.format_exc())
return False


@@ -10,6 +10,7 @@ config = [{
'tab': 'notifications',
'name': 'xbmc',
'label': 'XBMC',
'description': 'v11 (Eden) and v12 (Frodo)',
'options': [
{
'name': 'enabled',


@@ -3,6 +3,8 @@ from couchpotato.core.logger import CPLog
from couchpotato.core.notifications.base import Notification
from flask.helpers import json
import base64
import traceback
import urllib
log = CPLog(__name__)
@@ -10,24 +12,147 @@ log = CPLog(__name__)
class XBMC(Notification):
listen_to = ['renamer.after']
use_json_notifications = {}
def notify(self, message = '', data = {}, listener = None):
if self.isDisabled(): return
hosts = splitString(self.conf('host'))
successful = 0
for host in hosts:
response = self.request(host, [
('GUI.ShowNotification', {"title":"CouchPotato", "message":message}),
('VideoLibrary.Scan', {}),
])
for result in response:
if result['result'] == "OK":
successful += 1
if self.use_json_notifications.get(host) is None:
self.getXBMCJSONversion(host, message = message)
if self.use_json_notifications.get(host):
response = self.request(host, [
('GUI.ShowNotification', {'title':self.default_title, 'message':message}),
('VideoLibrary.Scan', {}),
])
else:
response = self.notifyXBMCnoJSON(host, {'title':self.default_title, 'message':message})
response += self.request(host, [('VideoLibrary.Scan', {})])
try:
for result in response:
if (result.get('result') and result['result'] == 'OK'):
successful += 1
elif (result.get('error')):
log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
except:
log.error('Failed parsing results: %s', traceback.format_exc())
return successful == len(hosts) * 2
def getXBMCJSONversion(self, host, message = ''):
success = False
# XBMC JSON-RPC version request
response = self.request(host, [
('JSONRPC.Version', {})
])
for result in response:
if (result.get('result') and type(result['result']['version']).__name__ == 'int'):
# only v2 and v4 return an int object
# v6 (as of XBMC v12(Frodo)) is required to send notifications
xbmc_rpc_version = str(result['result']['version'])
log.debug('XBMC JSON-RPC Version: %s ; Notifications by JSON-RPC only supported for v6 [as of XBMC v12(Frodo)]', xbmc_rpc_version)
# disable JSON use
self.use_json_notifications[host] = False
# send the text message
resp = self.notifyXBMCnoJSON(host, {'title':self.default_title, 'message':message})
for result in resp:
if (result.get('result') and result['result'] == 'OK'):
log.debug('Message delivered successfully!')
success = True
break
elif (result.get('error')):
log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
break
elif (result.get('result') and type(result['result']['version']).__name__ == 'dict'):
# XBMC JSON-RPC v6 returns a dict containing the
# major, minor and patch numbers
xbmc_rpc_version = str(result['result']['version']['major'])
xbmc_rpc_version += '.' + str(result['result']['version']['minor'])
xbmc_rpc_version += '.' + str(result['result']['version']['patch'])
log.debug('XBMC JSON-RPC Version: %s', xbmc_rpc_version)
# ok, XBMC version is supported
self.use_json_notifications[host] = True
# send the text message
resp = self.request(host, [('GUI.ShowNotification', {'title':self.default_title, 'message':message})])
for result in resp:
if (result.get('result') and result['result'] == 'OK'):
log.debug('Message delivered successfully!')
success = True
break
elif (result.get('error')):
log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
break
# error getting version info (we do have contact with XBMC though)
elif (result.get('error')):
log.error('XBMC error; %s: %s (%s)', (result['id'], result['error']['message'], result['error']['code']))
log.debug('Use JSON notifications: %s ', self.use_json_notifications)
return success
def notifyXBMCnoJSON(self, host, data):
server = 'http://%s/xbmcCmds/' % host
# title, message [, timeout , image #can be added!]
cmd = "xbmcHttp?command=ExecBuiltIn(Notification('%s','%s'))" % (urllib.quote(data['title']), urllib.quote(data['message']))
server += cmd
# The HTTP API doesn't document a required Content-Type; text/plain works in practice
headers = {
'Content-Type': 'text/plain',
}
# authentication support
if self.conf('password'):
base64string = base64.encodestring('%s:%s' % (self.conf('username'), self.conf('password'))).replace('\n', '')
headers['Authorization'] = 'Basic %s' % base64string
try:
log.debug('Sending non-JSON-type request to %s: %s', (host, data))
# response will either be 'OK':
# <html>
# <li>OK
# </html>
#
# or 'Error':
# <html>
# <li>Error:<message>
# </html>
#
response = self.urlopen(server, headers = headers)
if 'OK' in response:
log.debug('Returned from non-JSON-type request %s: %s', (host, response))
# manually fake expected response array
return [{'result': 'OK'}]
else:
log.error('Returned from non-JSON-type request %s: %s', (host, response))
# manually fake expected response array
return [{'result': 'Error'}]
except:
log.error('Failed sending non-JSON-type request to XBMC: %s', traceback.format_exc())
return [{'result': 'Error'}]
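The fallback above drives XBMC's legacy HTTP API with a single GET request. A sketch of the URL `notifyXBMCnoJSON()` builds (the host value is a placeholder):

```python
from urllib.parse import quote

# Build the ExecBuiltIn(Notification(...)) command URL for the legacy
# xbmcCmds HTTP API, with title and message URL-escaped.
def build_notification_url(host, title, message):
    cmd = "xbmcHttp?command=ExecBuiltIn(Notification('%s','%s'))" % (
        quote(title), quote(message))
    return 'http://%s/xbmcCmds/%s' % (host, cmd)

url = build_notification_url('localhost:8080', 'CouchPotato', 'Movie downloaded')
```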
def request(self, host, requests):
server = 'http://%s/jsonrpc' % host
@@ -50,10 +175,14 @@ class XBMC(Notification):
base64string = base64.encodestring('%s:%s' % (self.conf('username'), self.conf('password'))).replace('\n', '')
headers['Authorization'] = 'Basic %s' % base64string
log.debug('Sending request to %s: %s', (host, data))
rdata = self.urlopen(server, headers = headers, params = data, multipart = True)
response = json.loads(rdata)
log.debug('Returned from request %s: %s', (host, response))
try:
log.debug('Sending request to %s: %s', (host, data))
rdata = self.urlopen(server, headers = headers, params = data, multipart = True)
response = json.loads(rdata)
log.debug('Returned from request %s: %s', (host, response))
return response
return response
except:
log.error('Failed sending request to XBMC: %s', traceback.format_exc())
return []


@@ -12,7 +12,7 @@ class Automation(Plugin):
fireEvent('schedule.interval', 'automation.add_movies', self.addMovies, hours = self.conf('hour', default = 12))
if not Env.get('dev'):
if Env.get('dev'):
addEvent('app.load', self.addMovies)
def addMovies(self):


@@ -1,9 +1,8 @@
from StringIO import StringIO
from couchpotato import addView
from couchpotato.core.event import fireEvent, addEvent
from couchpotato.core.helpers.encoding import tryUrlencode, simplifyString, ss, \
toSafeString
from couchpotato.core.helpers.variable import getExt
from couchpotato.core.helpers.encoding import tryUrlencode, ss, toSafeString
from couchpotato.core.helpers.variable import getExt, md5
from couchpotato.core.logger import CPLog
from couchpotato.environment import Env
from flask.templating import render_template_string
@@ -35,7 +34,7 @@ class Plugin(object):
http_failed_disabled = {}
def registerPlugin(self):
addEvent('app.shutdown', self.doShutdown)
addEvent('app.do_shutdown', self.doShutdown)
addEvent('plugin.running', self.isRunning)
def conf(self, attr, value = None, default = None):
@@ -99,6 +98,7 @@ class Plugin(object):
# http request
def urlopen(self, url, timeout = 30, params = None, headers = None, opener = None, multipart = False, show_error = True):
url = ss(url)
if not headers: headers = {}
if not params: params = {}
@@ -114,8 +114,11 @@ class Plugin(object):
# Don't try for failed requests
if self.http_failed_disabled.get(host, 0) > 0:
if self.http_failed_disabled[host] > (time.time() - 900):
log.info('Disabled calls to %s for 15 minutes because of too many failed requests.', host)
raise Exception
log.info2('Disabled calls to %s for 15 minutes because of too many failed requests.', host)
if not show_error:
raise
else:
return ''
else:
del self.http_failed_request[host]
del self.http_failed_disabled[host]
@@ -127,8 +130,11 @@ class Plugin(object):
log.info('Opening multipart url: %s, params: %s', (url, [x for x in params.iterkeys()] if isinstance(params, dict) else 'with data'))
request = urllib2.Request(url, params, headers)
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
if opener:
opener.add_handler(MultipartPostHandler())
else:
cookies = cookielib.CookieJar()
opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
response = opener.open(request, timeout = timeout)
else:
@@ -219,7 +225,7 @@ class Plugin(object):
def getCache(self, cache_key, url = None, **kwargs):
cache_key = simplifyString(cache_key)
cache_key = md5(ss(cache_key))
cache = Env.get('cache').get(cache_key)
if cache:
if not Env.get('dev'): log.debug('Getting cache %s', cache_key)
@@ -239,9 +245,11 @@ class Plugin(object):
self.setCache(cache_key, data, timeout = cache_timeout)
return data
except:
if not kwargs.get('show_error'):
if not kwargs.get('show_error', True):
raise
return ''
def setCache(self, cache_key, value, timeout = 300):
log.debug('Setting cache %s', cache_key)
Env.get('cache').set(cache_key, value, timeout)


@@ -1,5 +1,6 @@
from couchpotato.api import addApiView
from couchpotato.core.helpers.request import getParam, jsonified
from couchpotato.core.helpers.variable import getUserDir
from couchpotato.core.plugins.base import Plugin
import ctypes
import os
@@ -65,15 +66,7 @@ class FileBrowser(Plugin):
def view(self):
path = getParam('path', '/')
# Set proper home dir for some systems
try:
import pwd
os.environ['HOME'] = pwd.getpwuid(os.geteuid()).pw_dir
except:
pass
home = os.path.expanduser('~')
home = getUserDir()
if not path:
path = home


@@ -66,10 +66,12 @@ class FileManager(Plugin):
time.sleep(3)
log.debug('Cleaning up unused files')
python_cache = Env.get('cache')._path
try:
db = get_session()
for root, dirs, walk_files in os.walk(Env.get('cache_dir')):
for filename in walk_files:
if root == python_cache: continue
file_path = os.path.join(root, filename)
f = db.query(File).filter(File.path == toUnicode(file_path)).first()
if not f:


@@ -115,12 +115,35 @@ class Manage(Plugin):
if done_movie['library']['identifier'] not in added_identifiers:
fireEvent('movie.delete', movie_id = done_movie['id'], delete_from = 'all')
else:
for release in done_movie.get('releases', []):
for release_file in release.get('files', []):
# Remove release not available anymore
if not os.path.isfile(ss(release_file['path'])):
fireEvent('release.clean', release['id'])
break
if len(release.get('files', [])) == 0:
fireEvent('release.delete', release['id'])
else:
for release_file in release.get('files', []):
# Remove release not available anymore
if not os.path.isfile(ss(release_file['path'])):
fireEvent('release.clean', release['id'])
break
# Check if there are duplicate releases (different quality) use the last one, delete the rest
if len(done_movie.get('releases', [])) > 1:
used_files = {}
for release in done_movie.get('releases', []):
for release_file in release.get('files', []):
already_used = used_files.get(release_file['path'])
if already_used:
log.debug('Duplicate files in releases %s and %s', (already_used, release['id']))
if already_used < release['id']:
fireEvent('release.delete', release['id'], single = True) # delete this one
else:
fireEvent('release.delete', already_used, single = True) # delete previous one
break
else:
used_files[release_file['path']] = release.get('id')
del used_files
Env.prop('manage.last_update', time.time())
except:
@@ -153,7 +176,7 @@ class Manage(Plugin):
'to_go': total_found,
}
if group['library']:
if group['library'] and group['library'].get('identifier'):
identifier = group['library'].get('identifier')
added_identifiers.append(identifier)
@@ -187,5 +210,5 @@ class Manage(Plugin):
groups = fireEvent('scanner.scan', folder = folder, files = files, single = True)
for group in groups.itervalues():
if group['library']:
if group['library'] and group['library'].get('identifier'):
fireEvent('release.add', group = group)


@@ -300,10 +300,11 @@ var MovieList = new Class({
},
deleteSelected: function(){
var self = this;
var ids = self.getSelectedMovies()
var self = this,
ids = self.getSelectedMovies(),
help_msg = self.identifier == 'wanted' ? 'If you do, you won\'t be able to watch them, as they won\'t get downloaded!' : 'Your files will be safe, this will only delete the reference from the CouchPotato manage list';
var qObj = new Question('Are you sure you want to delete '+ids.length+' movie'+ (ids.length != 1 ? 's' : '') +'?', 'If you do, you won\'t be able to watch them, as they won\'t get downloaded!', [{
var qObj = new Question('Are you sure you want to delete '+ids.length+' movie'+ (ids.length != 1 ? 's' : '') +'?', help_msg, [{
'text': 'Yes, delete '+(ids.length != 1 ? 'them' : 'it'),
'class': 'delete',
'events': {


@@ -479,29 +479,32 @@ var ReleaseAction = new Class({
self.release_container.getElement('#release_'+self.next_release.id).addClass('next_release');
}
self.trynext_container.adopt(
new Element('span.or', {
'text': 'This movie is snatched, if anything went wrong, download'
}),
self.last_release ? new Element('a.button.orange', {
'text': 'the same release again',
'events': {
'click': self.trySameRelease.bind(self)
}
}) : null,
self.next_release && self.last_release ? new Element('span.or', {
'text': ','
}) : null,
self.next_release ? [new Element('a.button.green', {
'text': self.last_release ? 'another release' : 'the best release',
'events': {
'click': self.tryNextRelease.bind(self)
}
}),
new Element('span.or', {
'text': 'or pick one below'
})] : null
)
if(self.next_release || self.last_release){
self.trynext_container.adopt(
new Element('span.or', {
'text': 'This movie is snatched, if anything went wrong, download'
}),
self.last_release ? new Element('a.button.orange', {
'text': 'the same release again',
'events': {
'click': self.trySameRelease.bind(self)
}
}) : null,
self.next_release && self.last_release ? new Element('span.or', {
'text': ','
}) : null,
self.next_release ? [new Element('a.button.green', {
'text': self.last_release ? 'another release' : 'the best release',
'events': {
'click': self.tryNextRelease.bind(self)
}
}),
new Element('span.or', {
'text': 'or pick one below'
})] : null
)
}
}


@@ -91,10 +91,15 @@
.movie_result {
overflow: hidden;
height: 140px;
position: relative;
}
.movie_result .options {
height: 140px;
position: absolute;
height: 100%;
width: 100%;
top: 0;
left: 0;
border: 1px solid transparent;
border-width: 1px 0;
border-radius: 0;
@@ -133,17 +138,21 @@
.movie_result .data {
padding: 0 15px;
width: 470px;
position: relative;
height: 140px;
position: absolute;
height: 100%;
width: 100%;
top: 0;
margin: -140px 0 0 0;
left: 0;
background: #5c697b;
cursor: pointer;
border-bottom: 1px solid #333;
border-top: 1px solid rgba(255,255,255, 0.15);
transition: all .6s cubic-bezier(0.9,0,0.1,1);
}
.movie_result .data.open {
left: 100%;
}
.movie_result:last-child .data { border-bottom: 0; }
@@ -193,4 +202,9 @@
.search_form .mask {
border-radius: 3px;
position: absolute;
height: 100%;
width: 100%;
left: 0;
top: 0;
}


@@ -19,7 +19,9 @@ Block.Search = new Class({
self.hideResults(false)
},
'blur': function(){
self.el.removeClass('focused')
(function(){
self.el.removeClass('focused')
}).delay(2000);
}
}
}),
@@ -117,7 +119,7 @@ Block.Search = new Class({
self.hideResults(false);
if(!cache){
self.positionMask().fade('in');
self.mask.fade('in');
if(!self.spinner)
self.spinner = createSpinner(self.mask);
@@ -139,7 +141,6 @@ Block.Search = new Class({
fill: function(q, json){
var self = this;
self.positionMask()
self.cache[q] = json
self.movies = {}
@@ -168,19 +169,6 @@ Block.Search = new Class({
},
positionMask: function(){
var self = this;
var s = self.result_container.getSize()
return self.mask.setStyles({
'width': s.x,
'height': s.y
}).position({
'relativeTo': self.result_container
})
},
loading: function(bool){
this.el[bool ? 'addClass' : 'removeClass']('loading')
},
@@ -193,7 +181,7 @@ Block.Search = new Class({
Block.Search.Item = new Class({
initialize: function(info){
initialize: function(info, options){
var self = this;
self.info = info;
@@ -203,14 +191,13 @@ Block.Search.Item = new Class({
},
create: function(){
var self = this;
var info = self.info;
var self = this,
info = self.info;
self.el = new Element('div.movie_result', {
'id': info.imdb
}).adopt(
self.options = new Element('div.options.inlay'),
self.options_el = new Element('div.options.inlay'),
self.data_container = new Element('div.data', {
'tween': {
duration: 400,
@@ -273,11 +260,7 @@ Block.Search.Item = new Class({
self.createOptions();
if(!self.width)
self.width = self.data_container.getCoordinates().width
self.data_container.tween('left', 0, self.width);
self.data_container.addClass('open');
self.el.addEvent('outerClick', self.closeOptions.bind(self))
},
@@ -285,7 +268,7 @@ Block.Search.Item = new Class({
closeOptions: function(){
var self = this;
self.data_container.tween('left', self.width, 0);
self.data_container.removeClass('open');
self.el.removeEvents('outerClick')
},
@@ -302,28 +285,31 @@ Block.Search.Item = new Class({
'profile_id': self.profile_select.get('value')
},
'onComplete': function(json){
self.options.empty();
self.options.adopt(
self.options_el.empty();
self.options_el.adopt(
new Element('div.message', {
'text': json.added ? 'Movie succesfully added.' : 'Movie didn\'t add properly. Check logs'
'text': json.added ? 'Movie successfully added.' : 'Movie didn\'t add properly. Check logs'
})
);
self.mask.fade('out');
},
'onFailure': function(){
self.options.empty();
self.options.adopt(
self.options_el.empty();
self.options_el.adopt(
new Element('div.message', {
'text': 'Something went wrong, check the logs for more info.'
})
);
self.mask.fade('out');
}
});
},
createOptions: function(){
var self = this;
var self = this,
info = self.info;
if(!self.options.hasClass('set')){
if(!self.options_el.hasClass('set')){
if(self.info.in_library){
var in_library = [];
@@ -332,10 +318,10 @@ Block.Search.Item = new Class({
});
}
self.options.adopt(
self.options_el.grab(
new Element('div').adopt(
self.option_thumbnail = self.info.images && self.info.images.poster.length > 0 ? new Element('img.thumbnail', {
'src': self.info.images.poster[0],
self.thumbnail = (info.images && info.images.poster.length > 0) ? new Element('img.thumbnail', {
'src': info.images.poster[0],
'height': null,
'width': null
}) : null,
@@ -372,7 +358,7 @@ Block.Search.Item = new Class({
}).inject(self.profile_select)
});
self.options.addClass('set');
self.options_el.addClass('set');
}
},
@@ -380,17 +366,7 @@ Block.Search.Item = new Class({
loadingMask: function(){
var self = this;
var s = self.options.getSize();
self.mask = new Element('span.mask', {
'styles': {
'position': 'relative',
'width': s.x,
'height': s.y,
'top': -s.y,
'display': 'block'
}
}).inject(self.options).fade('hide')
self.mask = new Element('span.mask').inject(self.el).fade('hide')
createSpinner(self.mask)
self.mask.fade('in')

View File

@@ -7,6 +7,7 @@ from couchpotato.core.helpers.variable import mergeDicts, md5, getExt
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.settings.model import Quality, Profile, ProfileType
from sqlalchemy.sql.expression import or_
import os.path
import re
import time
@@ -18,15 +19,15 @@ class QualityPlugin(Plugin):
qualities = [
{'identifier': 'bd50', 'hd': True, 'size': (15000, 60000), 'label': 'BR-Disk', 'alternative': ['bd25'], 'allow': ['1080p'], 'ext':[], 'tags': ['bdmv', 'certificate', ('complete', 'bluray')]},
{'identifier': '1080p', 'hd': True, 'size': (5000, 20000), 'label': '1080P', 'width': 1920, 'alternative': [], 'allow': [], 'ext':['mkv', 'm2ts'], 'tags': ['m2ts']},
{'identifier': '720p', 'hd': True, 'size': (3500, 10000), 'label': '720P', 'width': 1280, 'alternative': [], 'allow': [], 'ext':['mkv', 'ts']},
{'identifier': '1080p', 'hd': True, 'size': (5000, 20000), 'label': '1080P', 'width': 1920, 'height': 1080, 'alternative': [], 'allow': [], 'ext':['mkv', 'm2ts'], 'tags': ['m2ts']},
{'identifier': '720p', 'hd': True, 'size': (3500, 10000), 'label': '720P', 'width': 1280, 'height': 720, 'alternative': [], 'allow': [], 'ext':['mkv', 'ts']},
{'identifier': 'brrip', 'hd': True, 'size': (700, 7000), 'label': 'BR-Rip', 'alternative': ['bdrip'], 'allow': ['720p'], 'ext':['avi']},
{'identifier': 'dvdr', 'size': (3000, 10000), 'label': 'DVD-R', 'alternative': [], 'allow': [], 'ext':['iso', 'img'], 'tags': ['pal', 'ntsc', 'video_ts', 'audio_ts']},
{'identifier': 'dvdrip', 'size': (600, 2400), 'label': 'DVD-Rip', 'width': 720, 'alternative': ['dvdrip'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg']},
{'identifier': 'dvdrip', 'size': (600, 2400), 'label': 'DVD-Rip', 'width': 720, 'alternative': ['dvdrip'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg'], 'tags': [('dvd', 'rip'), ('dvd', 'xvid'), ('dvd', 'divx')]},
{'identifier': 'scr', 'size': (600, 1600), 'label': 'Screener', 'alternative': ['screener', 'dvdscr', 'ppvrip'], 'allow': ['dvdr', 'dvd'], 'ext':['avi', 'mpg', 'mpeg']},
{'identifier': 'r5', 'size': (600, 1000), 'label': 'R5', 'alternative': [], 'allow': ['dvdr'], 'ext':['avi', 'mpg', 'mpeg']},
{'identifier': 'tc', 'size': (600, 1000), 'label': 'TeleCine', 'alternative': ['telecine'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg']},
{'identifier': 'ts', 'size': (600, 1000), 'label': 'TeleSync', 'alternative': ['telesync'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg']},
{'identifier': 'ts', 'size': (600, 1000), 'label': 'TeleSync', 'alternative': ['telesync', 'hdts'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg']},
{'identifier': 'cam', 'size': (600, 1000), 'label': 'Cam', 'alternative': ['camrip', 'hdcam'], 'allow': [], 'ext':['avi', 'mpg', 'mpeg']}
]
pre_releases = ['cam', 'ts', 'tc', 'r5', 'scr']
@@ -76,7 +77,7 @@ class QualityPlugin(Plugin):
db = get_session()
quality_dict = {}
quality = db.query(Quality).filter_by(identifier = identifier).first()
quality = db.query(Quality).filter(or_(Quality.identifier == identifier, Quality.id == identifier)).first()
if quality:
quality_dict = dict(self.getQuality(quality.identifier), **quality.to_dict())
@@ -198,9 +199,14 @@ class QualityPlugin(Plugin):
for quality in self.all():
# Last check on resolution only
if quality.get('width', 480) == extra.get('resolution_width', 0):
log.debug('Found %s via resolution_width: %s == %s', (quality['identifier'], quality.get('width', 480), extra.get('resolution_width', 0)))
# Check width resolution, range 20
if (quality.get('width', 720) - 20) <= extra.get('resolution_width', 0) <= (quality.get('width', 720) + 20):
log.debug('Found %s via resolution_width: %s == %s', (quality['identifier'], quality.get('width', 720), extra.get('resolution_width', 0)))
return self.setCache(hash, quality)
# Check height resolution, range 20
if (quality.get('height', 480) - 20) <= extra.get('resolution_height', 0) <= (quality.get('height', 480) + 20):
log.debug('Found %s via resolution_height: %s == %s', (quality['identifier'], quality.get('height', 480), extra.get('resolution_height', 0)))
return self.setCache(hash, quality)
if 480 <= extra.get('resolution_width', 0) <= 720:

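The hunk above replaces the exact width comparison with a ±20-pixel window on both axes, so slightly cropped encodes still match. A minimal standalone sketch of that rule (the helper name is hypothetical; the quality table is a trimmed copy of the one shown earlier in this diff, with the same `width`/`height` defaults):

```python
# Sketch of the +/-20 pixel tolerance introduced in the hunk above.
QUALITIES = [
    {'identifier': '1080p', 'width': 1920, 'height': 1080},
    {'identifier': '720p', 'width': 1280, 'height': 720},
    {'identifier': 'dvdrip', 'width': 720},  # no height key, defaults to 480
]

def guess_by_resolution(width, height, tolerance=20):
    """Return the first quality whose width or height is within tolerance."""
    for quality in QUALITIES:
        # abs(w - res) <= 20 is the same test as (w - 20) <= res <= (w + 20)
        if abs(quality.get('width', 720) - width) <= tolerance:
            return quality['identifier']
        if abs(quality.get('height', 480) - height) <= tolerance:
            return quality['identifier']
    return None

# A 1912x1036 scan (cropped 1080p) now still matches via its width:
print(guess_by_resolution(1912, 1036))  # -> 1080p
```

Note that the height check also fires for entries without a `height` key via the 480 default, which is how a 480-line file can land on `dvdrip`.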
View File

@@ -133,6 +133,9 @@ class Release(Plugin):
db.delete(release_file)
db.commit()
if len(rel.files) == 0:
self.delete(id)
return True
return False

View File

@@ -82,6 +82,15 @@ config = [{
'unit': 'min(s)',
'description': 'Detect movie status every X minutes. Will start the renamer if movie is <strong>completed</strong> or handle <strong>failed</strong> download if these options are enabled',
},
{
'advanced': True,
'name': 'force_every',
'label': 'Force every',
'default': 2,
'type': 'int',
'unit': 'hour(s)',
'description': 'Forces the renamer to scan every X hours',
},
{
'advanced': True,
'name': 'next_on_failed',
@@ -124,13 +133,6 @@ config = [{
'type': 'choice',
'options': rename_options
},
{
'name': 'trailer_name',
'label': 'Trailer naming',
'default': '<filename>-trailer.<ext>',
'type': 'choice',
'options': rename_options
},
],
},
],

View File

@@ -33,9 +33,13 @@ class Renamer(Plugin):
addEvent('renamer.check_snatched', self.checkSnatched)
addEvent('app.load', self.scan)
addEvent('app.load', self.checkSnatched)
fireEvent('schedule.interval', 'renamer.check_snatched', self.checkSnatched, minutes = self.conf('run_every'))
fireEvent('schedule.interval', 'renamer.check_snatched_forced', self.scan, hours = 2)
if self.conf('run_every') > 0:
fireEvent('schedule.interval', 'renamer.check_snatched', self.checkSnatched, minutes = self.conf('run_every'))
if self.conf('force_every') > 0:
fireEvent('schedule.interval', 'renamer.check_snatched_forced', self.scan, hours = self.conf('force_every'))
def scanView(self):
@@ -310,7 +314,10 @@ class Renamer(Plugin):
elif release.status_id is snatched_status.get('id'):
if release.quality.id is group['meta_data']['quality']['id']:
log.debug('Marking release as downloaded')
release.status_id = downloaded_status.get('id')
try:
release.status_id = downloaded_status.get('id')
except Exception, e:
log.error('Failed marking release as finished: %s %s', (e, traceback.format_exc()))
db.commit()
# Remove leftover files
@@ -334,6 +341,7 @@ class Renamer(Plugin):
log.info('Removing "%s"', src)
try:
src = ss(src)
if os.path.isfile(src):
os.remove(src)
@@ -347,7 +355,10 @@ class Renamer(Plugin):
# Delete leftover folder from older releases
for delete_folder in delete_folders:
self.deleteEmptyFolder(delete_folder, show_error = False)
try:
self.deleteEmptyFolder(delete_folder, show_error = False)
except Exception, e:
log.error('Failed to delete folder: %s %s', (e, traceback.format_exc()))
# Rename all files marked
group['renamed_files'] = []
@@ -383,7 +394,10 @@ class Renamer(Plugin):
# Notify on download, search for trailers etc
download_message = 'Downloaded %s (%s)' % (movie_title, replacements['quality'])
fireEvent('renamer.after', message = download_message, group = group, in_order = True)
try:
fireEvent('renamer.after', message = download_message, group = group, in_order = True)
except:
log.error('Failed firing (some) of the renamer.after events: %s', traceback.format_exc())
# Break if CP wants to shut down
if self.shuttingDown():
@@ -485,6 +499,7 @@ class Renamer(Plugin):
return string.replace(' ', ' ').replace(' .', '.')
def deleteEmptyFolder(self, folder, show_error = True):
folder = ss(folder)
loge = log.error if show_error else log.debug
for root, dirs, files in os.walk(folder):

View File

@@ -23,7 +23,7 @@ class Scanner(Plugin):
'media': 314572800, # 300MB
'trailer': 1048576, # 1MB
}
ignored_in_path = ['_unpack', '_failed_', '_unknown_', '_exists_', '_failed_remove_', '_failed_rename_', '.appledouble', '.appledb', '.appledesktop', os.path.sep + '._', '.ds_store', 'cp.cpnfo'] #unpacking, smb-crap, hidden files
ignored_in_path = ['extracting', '_unpack', '_failed_', '_unknown_', '_exists_', '_failed_remove_', '_failed_rename_', '.appledouble', '.appledb', '.appledesktop', os.path.sep + '._', '.ds_store', 'cp.cpnfo'] #unpacking, smb-crap, hidden files
ignore_names = ['extract', 'extracting', 'extracted', 'movie', 'movies', 'film', 'films', 'download', 'downloads', 'video_ts', 'audio_ts', 'bdmv', 'certificate']
extensions = {
'movie': ['mkv', 'wmv', 'avi', 'mpg', 'mpeg', 'mp4', 'm2ts', 'iso', 'img', 'mdf', 'ts', 'm4v'],
@@ -53,6 +53,20 @@ class Scanner(Plugin):
'video': ['x264', 'h264', 'divx', 'xvid']
}
audio_codec_map = {
0x2000: 'ac3',
0x2001: 'dts',
0x0055: 'mp3',
0x0050: 'mp2',
0x0001: 'pcm',
0x003: 'pcm',
0x77a1: 'tta1',
0x5756: 'wav',
0x6750: 'vorbis',
0xF1AC: 'flac',
0x00ff: 'aac',
}
source_media = {
'bluray': ['bluray', 'blu-ray', 'brrip', 'br-rip'],
'hddvd': ['hddvd', 'hd-dvd'],
@@ -75,7 +89,7 @@ class Scanner(Plugin):
'()([ab])(\.....?)$' #*a.mkv
]
cp_imdb = '(\.cp\((?P<id>tt[0-9{7}]+)\))'
cp_imdb = '(.cp.(?P<id>tt[0-9{7}]+).)'
def __init__(self):
@@ -327,11 +341,11 @@ class Scanner(Plugin):
group['files']['movie'] = self.getMediaFiles(group['unsorted_files'])
if len(group['files']['movie']) == 0:
log.error('Couldn\t find any movie files for %s', identifier)
log.error('Couldn\'t find any movie files for %s', identifier)
continue
log.debug('Getting metadata for %s', identifier)
group['meta_data'] = self.getMetaData(group)
group['meta_data'] = self.getMetaData(group, folder = folder)
# Subtitle meta
group['subtitle_language'] = self.getSubtitleLanguage(group) if not simple else {}
@@ -381,7 +395,7 @@ class Scanner(Plugin):
return processed_movies
def getMetaData(self, group):
def getMetaData(self, group, folder = ''):
data = {}
files = list(group['files']['movie'])
@@ -407,10 +421,10 @@ class Scanner(Plugin):
if not data['quality']:
data['quality'] = fireEvent('quality.single', 'dvdr' if group['is_dvd'] else 'dvdrip', single = True)
data['quality_type'] = 'HD' if data.get('resolution_width', 0) >= 1280 else 'SD'
data['quality_type'] = 'HD' if data.get('resolution_width', 0) >= 1280 or data['quality'].get('hd') else 'SD'
filename = re.sub('(.cp\(tt[0-9{7}]+\))', '', files[0])
data['group'] = self.getGroup(filename)
data['group'] = self.getGroup(filename[len(folder):])
data['source'] = self.getSourceMedia(filename)
return data
@@ -419,9 +433,18 @@ class Scanner(Plugin):
try:
p = enzyme.parse(filename)
# Video codec
vc = ('h264' if p.video[0].codec == 'AVC1' else p.video[0].codec).lower()
# Audio codec
ac = p.audio[0].codec
try: ac = self.audio_codec_map.get(p.audio[0].codec)
except: pass
return {
'video': p.video[0].codec,
'audio': p.audio[0].codec,
'video': vc,
'audio': ac,
'resolution_width': tryInt(p.video[0].width),
'resolution_height': tryInt(p.video[0].height),
}
@@ -738,8 +761,8 @@ class Scanner(Plugin):
def getGroup(self, file):
try:
match = re.search('-(?P<group>[A-Z0-9]+).', file, re.I)
return match.group('group') or ''
match = re.findall('\-([A-Z0-9]+)[\.\/]', file, re.I)
return match[-1] or ''
except:
return ''
@@ -752,7 +775,7 @@ class Scanner(Plugin):
return None
def findYear(self, text):
matches = re.search('(?P<year>[12]{1}[0-9]{3})', text)
matches = re.search('(?P<year>19[0-9]{2}|20[0-9]{2})', text)
if matches:
return matches.group('year')

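The `findYear` change above narrows the pattern from "any 1xxx/2xxx number" to 19xx/20xx, which stops resolution tokens such as `2160` or `1080` from being mistaken for release years. A quick before/after comparison (a standalone sketch, not the plugin itself):

```python
import re

OLD_YEAR = re.compile('(?P<year>[12]{1}[0-9]{3})')        # pattern before this diff
NEW_YEAR = re.compile('(?P<year>19[0-9]{2}|20[0-9]{2})')  # pattern after this diff

def find_year(pattern, text):
    match = pattern.search(text)
    return match.group('year') if match else None

# The old pattern happily matched the "2160" in a resolution tag:
print(find_year(OLD_YEAR, 'Some.Movie.2160p.x264'))  # -> 2160
# The new pattern only accepts plausible years:
print(find_year(NEW_YEAR, 'Some.Movie.2160p.x264'))  # -> None
print(find_year(NEW_YEAR, 'Some.Movie.1999.x264'))   # -> 1999
```

The trade-off is that pre-1900 and post-2099 years no longer match, which is acceptable for film release names.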
View File

@@ -27,9 +27,9 @@ class Score(Plugin):
score += sizeScore(nzb['size'])
# Torrents only
if nzb.get('seeds'):
if nzb.get('seeders'):
try:
score += nzb.get('seeds') / 5
score += nzb.get('seeders') / 5
score += nzb.get('leechers') / 10
except:
pass

View File

@@ -116,7 +116,7 @@ def sizeScore(size):
def providerScore(provider):
if provider in ['NZBMatrix', 'Nzbs', 'Newzbin']:
if provider in ['OMGWTFNZBs', 'PassThePopcorn', 'SceneAccess', 'TorrentLeech']:
return 20
if provider in ['Newznab']:

View File

@@ -24,18 +24,19 @@ config = [{
'name': 'required_words',
'label': 'Required words',
'default': '',
'description': 'Ignore releases that don\'t contain at least one of these words.'
'placeholder': 'Example: DTS, AC3 & English',
'description': 'Ignore releases that don\'t contain at least one set of words. Sets are separated by "," and each word within a set must be separated with "&"'
},
{
'name': 'ignored_words',
'label': 'Ignored words',
'default': 'german, dutch, french, truefrench, danish, swedish, spanish, italian, korean, dubbed, swesub, korsub',
'default': 'german, dutch, french, truefrench, danish, swedish, spanish, italian, korean, dubbed, swesub, korsub, dksubs',
},
{
'name': 'preferred_method',
'label': 'First search',
'description': 'Which of the methods do you prefer',
'default': 'nzb',
'default': 'both',
'type': 'dropdown',
'values': [('usenet & torrents', 'both'), ('usenet', 'nzb'), ('torrents', 'torrent')],
},

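The new description above defines a two-level syntax: `,` separates alternative sets, and `&` joins words that must all be present within one set. A minimal sketch of that matching rule (the function name is hypothetical; the real check lives in the searcher plugin elsewhere in this diff):

```python
import re

def matches_required(release_name, required_setting):
    """True if the release contains every word of at least one '&' set."""
    if not required_setting.strip():
        return True  # nothing is required
    words = set(re.split(r'\W+', release_name.lower()))
    for word_set in required_setting.lower().split(','):
        required = {w.strip() for w in word_set.split('&')}
        if required <= words:  # every word of this set is present
            return True
    return False

# With the example setting "DTS, AC3 & English":
print(matches_required('Movie.2012.DTS.x264', 'DTS, AC3 & English'))     # True
print(matches_required('Movie.2012.AC3.x264', 'DTS, AC3 & English'))     # False, set needs English too
print(matches_required('Movie.2012.English.AC3', 'DTS, AC3 & English'))  # True
```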
View File

@@ -3,7 +3,8 @@ from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import simplifyString, toUnicode
from couchpotato.core.helpers.request import jsonified, getParam
from couchpotato.core.helpers.variable import md5, getTitle
from couchpotato.core.helpers.variable import md5, getTitle, splitString, \
possibleTitles
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.settings.model import Movie, Release, ReleaseInfo
@@ -204,6 +205,10 @@ class Searcher(Plugin):
for nzb in sorted_results:
if not quality_type.get('finish', False) and quality_type.get('wait_for', 0) > 0 and nzb.get('age') <= quality_type.get('wait_for', 0):
log.info('Ignored, waiting %s days: %s', (quality_type.get('wait_for'), nzb['name']))
continue
if nzb['status_id'] == ignored_status.get('id'):
log.info('Ignored: %s', nzb['name'])
continue
@@ -239,7 +244,7 @@ class Searcher(Plugin):
filedata = None
if data.get('download') and (ismethod(data.get('download')) or isfunction(data.get('download'))):
filedata = data.get('download')(url = data.get('url'), nzb_id = data.get('id'))
if filedata is 'try_next':
if filedata == 'try_next':
return filedata
successful = fireEvent('download', data = data, movie = movie, manual = manual, filedata = filedata, single = True)
@@ -293,49 +298,54 @@ class Searcher(Plugin):
imdb_results = kwargs.get('imdb_results', False)
retention = Env.setting('retention', section = 'nzb')
if nzb.get('seeds') is None and 0 < retention < nzb.get('age', 0):
log.info('Wrong: Outside retention, age is %s, needs %s or lower: %s', (nzb['age'], retention, nzb['name']))
if nzb.get('seeders') is None and 0 < retention < nzb.get('age', 0):
log.info2('Wrong: Outside retention, age is %s, needs %s or lower: %s', (nzb['age'], retention, nzb['name']))
return False
movie_name = getTitle(movie['library'])
movie_words = re.split('\W+', simplifyString(movie_name))
nzb_name = simplifyString(nzb['name'])
nzb_words = re.split('\W+', nzb_name)
required_words = [x.strip().lower() for x in self.conf('required_words').lower().split(',')]
required_words = splitString(self.conf('required_words').lower())
if self.conf('required_words') and not list(set(nzb_words) & set(required_words)):
log.info("Wrong: Required word missing: %s" % nzb['name'])
req_match = 0
for req_set in required_words:
req = splitString(req_set, '&')
req_match += len(list(set(nzb_words) & set(req))) == len(req)
if self.conf('required_words') and req_match == 0:
log.info2("Wrong: Required word missing: %s" % nzb['name'])
return False
ignored_words = [x.strip().lower() for x in self.conf('ignored_words').split(',')]
blacklisted = list(set(nzb_words) & set(ignored_words))
ignored_words = splitString(self.conf('ignored_words').lower())
blacklisted = list(set(nzb_words) & set(ignored_words) - set(movie_words))
if self.conf('ignored_words') and blacklisted:
log.info("Wrong: '%s' blacklisted words: %s" % (nzb['name'], ", ".join(blacklisted)))
log.info2("Wrong: '%s' blacklisted words: %s" % (nzb['name'], ", ".join(blacklisted)))
return False
pron_tags = ['xxx', 'sex', 'anal', 'tits', 'fuck', 'porn', 'orgy', 'milf', 'boobs', 'erotica', 'erotic']
for p_tag in pron_tags:
if p_tag in nzb_words and p_tag not in movie_words:
log.info('Wrong: %s, probably pr0n', (nzb['name']))
return False
pron_words = list(set(nzb_words) & set(pron_tags) - set(movie_words))
if pron_words:
log.info('Wrong: %s, probably pr0n', (nzb['name']))
return False
#qualities = fireEvent('quality.all', single = True)
preferred_quality = fireEvent('quality.single', identifier = quality['identifier'], single = True)
# Contains lower quality string
if self.containsOtherQuality(nzb, movie_year = movie['library']['year'], preferred_quality = preferred_quality):
log.info('Wrong: %s, looking for %s', (nzb['name'], quality['label']))
log.info2('Wrong: %s, looking for %s', (nzb['name'], quality['label']))
return False
# File too small
if nzb['size'] and preferred_quality['size_min'] > nzb['size']:
log.info('"%s" is too small to be %s. %sMB instead of the minimal of %sMB.', (nzb['name'], preferred_quality['label'], nzb['size'], preferred_quality['size_min']))
log.info2('Wrong: "%s" is too small to be %s. %sMB instead of the minimal of %sMB.', (nzb['name'], preferred_quality['label'], nzb['size'], preferred_quality['size_min']))
return False
# File too large
if nzb['size'] and preferred_quality.get('size_max') < nzb['size']:
log.info('"%s" is too large to be %s. %sMB instead of the maximum of %sMB.', (nzb['name'], preferred_quality['label'], nzb['size'], preferred_quality['size_max']))
log.info2('Wrong: "%s" is too large to be %s. %sMB instead of the maximum of %sMB.', (nzb['name'], preferred_quality['label'], nzb['size'], preferred_quality['size_max']))
return False
@@ -353,20 +363,21 @@ class Searcher(Plugin):
return True
# Check if nzb contains imdb link
if self.checkIMDB([nzb['description']], movie['library']['identifier']):
if self.checkIMDB([nzb.get('description', '')], movie['library']['identifier']):
return True
for movie_title in movie['library']['titles']:
movie_words = re.split('\W+', simplifyString(movie_title['title']))
for raw_title in movie['library']['titles']:
for movie_title in possibleTitles(raw_title['title']):
movie_words = re.split('\W+', simplifyString(movie_title))
if self.correctName(nzb['name'], movie_title['title']):
# if no IMDB link, at least check year range 1
if len(movie_words) > 2 and self.correctYear([nzb['name']], movie['library']['year'], 1):
return True
if self.correctName(nzb['name'], movie_title):
# if no IMDB link, at least check year range 1
if len(movie_words) > 2 and self.correctYear([nzb['name']], movie['library']['year'], 1):
return True
# if no IMDB link, at least check year
if len(movie_words) <= 2 and self.correctYear([nzb['name']], movie['library']['year'], 0):
return True
# if no IMDB link, at least check year
if len(movie_words) <= 2 and self.correctYear([nzb['name']], movie['library']['year'], 0):
return True
log.info("Wrong: %s, undetermined naming. Looking for '%s (%s)'" % (nzb['name'], movie_name, movie['library']['year']))
return False
@@ -389,9 +400,14 @@ class Searcher(Plugin):
if list(set(nzb_words) & set(quality['alternative'])):
found[quality['identifier']] = True
# Try guessing via quality tags
guess = fireEvent('quality.guess', [nzb.get('name')], single = True)
if guess:
found[guess['identifier']] = True
# Hack for older movies that don't contain quality tag
year_name = fireEvent('scanner.name_year', name, single = True)
if movie_year < datetime.datetime.now().year - 3 and not year_name.get('year', None):
if len(found) == 0 and movie_year < datetime.datetime.now().year - 3 and not year_name.get('year', None):
if size > 3000: # Assume dvdr
log.info('Quality was missing in name, assuming it\'s a DVD-R based on the size: %s', (size))
found['dvdr'] = True
@@ -430,12 +446,16 @@ class Searcher(Plugin):
def correctName(self, check_name, movie_name):
check_names = [check_name]
try:
check_names.append(re.search(r'([\'"])[^\1]*\1', check_name).group(0))
except:
pass
for check_name in check_names:
# Match names between "
try: check_names.append(re.search(r'([\'"])[^\1]*\1', check_name).group(0))
except: pass
# Match longest name between []
try: check_names.append(max(check_name.split('['), key = len))
except: pass
for check_name in list(set(check_names)):
check_movie = fireEvent('scanner.name_year', check_name, single = True)
try:

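The updated blacklist check above, `set(nzb_words) & set(ignored_words) - set(movie_words)`, drops any ignored word that also appears in the movie's own title, so a film legitimately named with an otherwise-ignored word is no longer rejected. The same pattern is reused for the pr0n-tag filter. A small standalone illustration (function name hypothetical):

```python
import re

def blacklisted_words(release_name, movie_title, ignored):
    nzb_words = set(re.split(r'\W+', release_name.lower()))
    movie_words = set(re.split(r'\W+', movie_title.lower()))
    # '-' binds tighter than '&' in Python, so this is nzb & (ignored - movie),
    # which for sets equals (nzb & ignored) - movie: intersect, then drop title words.
    return sorted(set(nzb_words) & set(ignored) - movie_words)

ignored = {'german', 'dutch', 'french', 'dubbed'}
# 'french' is part of the title, so it is no longer a reason to reject:
print(blacklisted_words('The.French.Connection.1971.x264', 'The French Connection', ignored))  # []
# ...but a genuinely dubbed release is still caught:
print(blacklisted_words('Some.Movie.2012.German.Dubbed', 'Some Movie', ignored))  # ['dubbed', 'german']
```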
View File

@@ -22,6 +22,7 @@ class StatusPlugin(Plugin):
'failed': 'Failed',
'deleted': 'Deleted',
'ignored': 'Ignored',
'available': 'Available',
}
def __init__(self):

View File

@@ -24,6 +24,13 @@ config = [{
'type': 'dropdown',
'values': [('1080P', '1080p'), ('720P', '720p'), ('480P', '480p')],
},
{
'name': 'name',
'label': 'Naming',
'default': '<filename>-trailer',
'advanced': True,
'description': 'Use <filename> to use the above settings.'
},
],
},
],

View File

@@ -19,10 +19,11 @@ class Trailer(Plugin):
trailers = fireEvent('trailer.search', group = group, merge = True)
if not trailers or trailers == []:
log.info('No trailers found for: %s', getTitle(group['library']))
return
return False
for trailer in trailers.get(self.conf('quality'), []):
destination = '%s-trailer.%s' % (self.getRootName(group), getExt(trailer))
filename = self.conf('name').replace('<filename>', group['filename']) + ('.%s' % getExt(trailer))
destination = os.path.join(group['destination_dir'], filename)
if not os.path.isfile(destination):
fireEvent('file.download', url = trailer, dest = destination, urlopen_kwargs = {'headers': {'User-Agent': 'Quicktime'}}, single = True)
else:
@@ -33,5 +34,5 @@ class Trailer(Plugin):
# Download first and break
break
def getRootName(self, data = {}):
return os.path.join(data['destination_dir'], data['filename'])
return True

View File

@@ -21,6 +21,7 @@ class Userscript(Plugin):
addApiView('userscript.get/<random>/<path:filename>', self.getUserScript, static = True)
addApiView('userscript', self.iFrame)
addApiView('userscript.add_via_url', self.getViaUrl)
addApiView('userscript.includes', self.getIncludes)
addApiView('userscript.bookmark', self.bookmark)
addEvent('userscript.get_version', self.getVersion)
@@ -35,6 +36,13 @@ class Userscript(Plugin):
return self.renderTemplate(__file__, 'bookmark.js', **params)
def getIncludes(self):
return jsonified({
'includes': fireEvent('userscript.get_includes', merge = True),
'excludes': fireEvent('userscript.get_excludes', merge = True),
})
def getUserScript(self, random = '', filename = ''):
params = {

View File

@@ -9,7 +9,7 @@ Page.Wizard = new Class({
headers: {
'welcome': {
'title': 'Welcome to the new CouchPotato',
'description': 'To get started, fill in each of the following settings as much as your can. <br />Maybe first start with importing your movies from the previous CouchPotato',
'description': 'To get started, fill in each of the following settings as much as you can. <br />Maybe first start with importing your movies from the previous CouchPotato',
'content': new Element('div', {
'styles': {
'margin': '0 0 0 30px'
@@ -37,7 +37,7 @@ Page.Wizard = new Class({
},
'downloaders': {
'title': 'What download apps are you using?',
'description': 'CP needs an external download app to work with. Choose one below. For more downloaders check settings after you have filled in the wizard. If your download app isn\'t in the list, use Blackhole.'
'description': 'CP needs an external download app to work with. Choose one below. For more downloaders check settings after you have filled in the wizard. If your download app isn\'t in the list, use the default Blackhole.'
},
'providers': {
'title': 'Are you registered at any of these sites?',

View File

@@ -1,13 +1,13 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.core.providers.base import Provider
from couchpotato.environment import Env
import time
log = CPLog(__name__)
class Automation(Plugin):
class Automation(Provider):
enabled_option = 'automation_enabled'
@@ -19,6 +19,9 @@ class Automation(Plugin):
def _getMovies(self):
if self.isDisabled():
return
if not self.canCheck():
log.debug('Just checked, skipping %s', self.getName())
return []

View File

@@ -1,8 +1,7 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5, tryInt
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -14,32 +13,24 @@ class Bluray(Automation, RSS):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
cache_key = 'bluray.%s' % md5(self.rss_url)
rss_data = self.getCache(cache_key, self.rss_url)
data = XMLTree.fromstring(rss_data)
rss_movies = self.getRSSData(self.rss_url)
if data is not None:
rss_movies = self.getElements(data, 'channel/item')
for movie in rss_movies:
name = self.getTextElement(movie, 'title').lower().split('blu-ray')[0].strip('(').rstrip()
year = self.getTextElement(movie, 'description').split('|')[1].strip('(').strip()
for movie in rss_movies:
name = self.getTextElement(movie, "title").lower().split("blu-ray")[0].strip("(").rstrip()
year = self.getTextElement(movie, "description").split("|")[1].strip("(").strip()
if not name.find('/') == -1: # make sure it is not a double movie release
continue
if not name.find("/") == -1: # make sure it is not a double movie release
continue
if tryInt(year) < self.getMinimal('year'):
continue
if tryInt(year) < self.getMinimal('year'):
continue
imdb = self.search(name, year)
imdb = self.search(name, year)
if imdb:
if self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
if imdb:
if self.isMinimalMovie(imdb):
movies.append(imdb['imdb'])
return movies

View File

@@ -8,7 +8,4 @@ class CP(Automation):
def getMovies(self):
if self.isDisabled():
return
return []

View File

@@ -1,9 +1,8 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5, getImdb
from couchpotato.core.helpers.variable import getImdb, splitString, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import traceback
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
@@ -14,35 +13,26 @@ class IMDB(Automation, RSS):
def getIMDBids(self):
if self.isDisabled():
return
movies = []
enablers = self.conf('automation_urls_use').split(',')
enablers = [tryInt(x) for x in splitString(self.conf('automation_urls_use'))]
urls = splitString(self.conf('automation_urls'))
index = -1
for rss_url in self.conf('automation_urls').split(','):
for url in urls:
index += 1
if not enablers[index]:
continue
elif 'rss.imdb' not in rss_url:
log.error('This isn\'t the correct url.: %s', rss_url)
continue
try:
cache_key = 'imdb.rss.%s' % md5(rss_url)
rss_data = self.getHTMLData(url)
imdbs = getImdb(rss_data, multiple = True)
rss_data = self.getCache(cache_key, rss_url)
data = XMLTree.fromstring(rss_data)
rss_movies = self.getElements(data, 'channel/item')
for movie in rss_movies:
imdb = getImdb(self.getTextElement(movie, "link"))
for imdb in imdbs:
movies.append(imdb)
except:
log.error('Failed loading IMDB watchlist: %s %s', (rss_url, traceback.format_exc()))
log.error('Failed loading IMDB watchlist: %s %s', (url, traceback.format_exc()))
return movies

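The IMDB watchlist change above replaces per-item XML parsing with a single regex pass over the raw feed body (`getImdb(rss_data, multiple = True)`). That helper belongs to CouchPotato; a minimal stand-in showing the idea (the seven-digit `tt` pattern matches IMDB ids of that era):

```python
import re

def extract_imdb_ids(text):
    """Collect distinct ttNNNNNNN ids in order of first appearance."""
    seen = []
    for imdb_id in re.findall(r'tt\d{7}', text):
        if imdb_id not in seen:
            seen.append(imdb_id)
    return seen

feed = '''
<item><link>http://www.imdb.com/title/tt0133093/</link></item>
<item><link>http://www.imdb.com/title/tt0234215/</link></item>
<item><link>http://www.imdb.com/title/tt0133093/</link></item>
'''
print(extract_imdb_ids(feed))  # ['tt0133093', 'tt0234215']
```

Scanning the whole payload is also what makes the earlier `'rss.imdb' not in rss_url` guard unnecessary: anything containing `tt` ids now works.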
View File

@@ -0,0 +1,35 @@
from .main import ITunes
def start():
return ITunes()
config = [{
'name': 'itunes',
'groups': [
{
'tab': 'automation',
'name': 'itunes_automation',
'label': 'iTunes',
'description': 'From any <a href="http://itunes.apple.com/rss">iTunes</a> Store feed. Url should be the RSS link. (uses minimal requirements)',
'options': [
{
'name': 'automation_enabled',
'default': False,
'type': 'enabler',
},
{
'name': 'automation_urls_use',
'label': 'Use',
'default': ',',
},
{
'name': 'automation_urls',
'label': 'url',
'type': 'combined',
'combine': ['automation_urls_use', 'automation_urls'],
'default': 'https://itunes.apple.com/rss/topmovies/limit=25/xml,',
},
],
},
],
}]

View File

@@ -0,0 +1,63 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import md5, splitString, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
from xml.etree.ElementTree import QName
import datetime
import traceback
import xml.etree.ElementTree as XMLTree

log = CPLog(__name__)


class ITunes(Automation, RSS):

    interval = 1800

    def getIMDBids(self):

        if self.isDisabled():
            return

        movies = []

        enablers = [tryInt(x) for x in splitString(self.conf('automation_urls_use'))]
        urls = splitString(self.conf('automation_urls'))

        namespace = 'http://www.w3.org/2005/Atom'
        namespaceIM = 'http://itunes.apple.com/rss'

        index = -1
        for url in urls:

            index += 1

            if not enablers[index]:
                continue

            try:
                cache_key = 'itunes.rss.%s' % md5(url)
                rss_data = self.getCache(cache_key, url)
                data = XMLTree.fromstring(rss_data)

                if data is not None:
                    entry_tag = str(QName(namespace, 'entry'))
                    rss_movies = self.getElements(data, entry_tag)

                    for movie in rss_movies:
                        name_tag = str(QName(namespaceIM, 'name'))
                        name = self.getTextElement(movie, name_tag)

                        releaseDate_tag = str(QName(namespaceIM, 'releaseDate'))
                        releaseDateText = self.getTextElement(movie, releaseDate_tag)
                        year = datetime.datetime.strptime(releaseDateText, '%Y-%m-%dT00:00:00-07:00').strftime('%Y')

                        imdb = self.search(name, year)

                        if imdb and self.isMinimalMovie(imdb):
                            movies.append(imdb['imdb'])
            except:
                log.error('Failed loading iTunes rss feed: %s %s', (url, traceback.format_exc()))

        return movies
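The namespace-qualified lookups above can be exercised against a tiny inline feed. This is a standalone sketch, not part of the commit; the sample XML stands in for the live iTunes RSS:

```python
import xml.etree.ElementTree as XMLTree
from xml.etree.ElementTree import QName

ATOM = 'http://www.w3.org/2005/Atom'
IM = 'http://itunes.apple.com/rss'

# Minimal two-namespace feed in the same shape the provider parses.
feed = XMLTree.fromstring(
    '<feed xmlns="%s" xmlns:im="%s">'
    '<entry><im:name>Example Movie</im:name>'
    '<im:releaseDate>2012-07-04T00:00:00-07:00</im:releaseDate></entry>'
    '</feed>' % (ATOM, IM))

# str(QName(ns, tag)) expands to '{ns}tag', the form ElementTree matches on.
entry = feed.find(str(QName(ATOM, 'entry')))
name = entry.find(str(QName(IM, 'name'))).text
year = entry.find(str(QName(IM, 'releaseDate'))).text[:4]
print(name, year)  # Example Movie 2012
```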


@@ -1,9 +1,7 @@
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import datetime

log = CPLog(__name__)
@@ -15,25 +13,17 @@ class Kinepolis(Automation, RSS):
    def getIMDBids(self):

        if self.isDisabled():
            return

        movies = []

        rss_movies = self.getRSSData(self.rss_url)

        for movie in rss_movies:
            name = self.getTextElement(movie, 'title')
            year = datetime.datetime.now().strftime('%Y')

            imdb = self.search(name, year)

            if imdb and self.isMinimalMovie(imdb):
                movies.append(imdb['imdb'])

        return movies


@@ -0,0 +1,23 @@
from .main import Moviemeter

def start():
    return Moviemeter()

config = [{
    'name': 'moviemeter',
    'groups': [
        {
            'tab': 'automation',
            'name': 'moviemeter_automation',
            'label': 'Moviemeter',
            'description': 'Imports movies from the current top 10 of moviemeter.nl. (uses minimal requirements)',
            'options': [
                {
                    'name': 'automation_enabled',
                    'default': False,
                    'type': 'enabler',
                },
            ],
        },
    ],
}]


@@ -0,0 +1,28 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation

log = CPLog(__name__)


class Moviemeter(Automation, RSS):

    interval = 1800
    rss_url = 'http://www.moviemeter.nl/rss/cinema'

    def getIMDBids(self):

        movies = []

        rss_movies = self.getRSSData(self.rss_url)

        for movie in rss_movies:
            name_year = fireEvent('scanner.name_year', self.getTextElement(movie, 'title'), single = True)
            imdb = self.search(name_year.get('name'), name_year.get('year'))

            if imdb and self.isMinimalMovie(imdb):
                movies.append(imdb['imdb'])

        return movies
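A plain-regex stand-in for the `scanner.name_year` event used above can illustrate the split. This sketch is an assumption for illustration only; the real scanner handles far more title shapes:

```python
import re

def name_year(title):
    # Split 'Title (YYYY)' into its parts; fall back to the raw title.
    m = re.match(r'^(?P<name>.+?)\s*\((?P<year>\d{4})\)$', title)
    if m:
        return {'name': m.group('name'), 'year': int(m.group('year'))}
    return {'name': title, 'year': None}

print(name_year('Skyfall (2012)'))  # {'name': 'Skyfall', 'year': 2012}
```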


@@ -1,11 +1,8 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt, splitString
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
from xml.etree.ElementTree import ParseError
import traceback

log = CPLog(__name__)
@@ -16,39 +13,27 @@ class MoviesIO(Automation, RSS):
    def getIMDBids(self):

        if self.isDisabled():
            return

        movies = []

        enablers = [tryInt(x) for x in splitString(self.conf('automation_urls_use'))]

        index = -1
        for rss_url in splitString(self.conf('automation_urls')):

            index += 1

            if not enablers[index]:
                continue

            try:
                rss_movies = self.getRSSData(rss_url, headers = {'Referer': ''})

                for movie in rss_movies:

                    nameyear = fireEvent('scanner.name_year', self.getTextElement(movie, 'title'), single = True)
                    imdb = self.search(nameyear.get('name'), nameyear.get('year'), imdb_only = True)

                    if not imdb:
                        continue

                    movies.append(imdb)
            except ParseError:
                log.debug('Failed loading Movies.io watchlist, probably empty: %s', (rss_url))
            except:
                log.error('Failed loading Movies.io watchlist: %s %s', (rss_url, traceback.format_exc()))

        return movies


@@ -1,9 +1,8 @@
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.variable import sha1
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.automation.base import Automation
import base64

log = CPLog(__name__)
@@ -25,9 +24,6 @@ class Trakt(Automation):
    def getIMDBids(self):

        if self.isDisabled():
            return

        movies = []

        for movie in self.getWatchlist():
            movies.append(movie.get('imdb_id'))
@@ -38,22 +34,11 @@ class Trakt(Automation):
        method = (self.urls['watchlist'] % self.conf('automation_api_key')) + self.conf('automation_username')
        return self.call(method)

    def call(self, method_url):

        headers = {}
        if self.conf('automation_password'):
            headers['Authorization'] = 'Basic %s' % base64.encodestring('%s:%s' % (self.conf('automation_username'), self.conf('automation_password')))[:-1]

        data = self.getJsonData(self.urls['base'] + method_url, headers = headers)

        return data if data else []
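The Authorization header assembled in `call()` can be checked in isolation. A minimal sketch, assuming `base64.b64encode` in place of the Python 2 `encodestring`, with made-up credentials:

```python
import base64

def basic_auth_header(username, password):
    # encodestring() in the original appends a trailing newline, hence its
    # [:-1]; b64encode produces the token without one.
    token = base64.b64encode(('%s:%s' % (username, password)).encode()).decode()
    return {'Authorization': 'Basic %s' % token}

print(basic_auth_header('user', 'pass'))  # {'Authorization': 'Basic dXNlcjpwYXNz'}
```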


@@ -1,11 +1,17 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.variable import tryFloat, mergeDicts, md5, \
    possibleTitles, getTitle
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from urlparse import urlparse
import cookielib
import json
import re
import time
import traceback
import urllib2
import xml.etree.ElementTree as XMLTree

log = CPLog(__name__)
@@ -38,6 +44,34 @@ class Provider(Plugin):
        return self.is_available.get(host, False)

    def getJsonData(self, url, **kwargs):

        data = self.getCache(md5(url), url, **kwargs)

        if data:
            try:
                return json.loads(data)
            except:
                log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))

        return []

    def getRSSData(self, url, **kwargs):

        data = self.getCache(md5(url), url, **kwargs)

        if data:
            try:
                data = XMLTree.fromstring(data)
                return self.getElements(data, 'channel/item')
            except:
                log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))

        return []

    def getHTMLData(self, url, **kwargs):
        return self.getCache(md5(url), url, **kwargs)
class YarrProvider(Provider):
@@ -47,22 +81,76 @@ class YarrProvider(Provider):
    sizeMb = ['mb', 'mib']
    sizeKb = ['kb', 'kib']

    login_opener = None

    def __init__(self):
        addEvent('provider.belongs_to', self.belongsTo)
        addEvent('%s.search' % self.type, self.search)
        addEvent('yarr.search', self.search)
        addEvent('nzb.feed', self.feed)

    def login(self):

        try:
            cookiejar = cookielib.CookieJar()
            opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookiejar))
            urllib2.install_opener(opener)
            log.info2('Logging into %s', self.urls['login'])

            f = opener.open(self.urls['login'], self.getLoginParams())
            output = f.read()
            f.close()

            if self.loginSuccess(output):
                self.login_opener = opener
                return True
        except:
            log.error('Failed to login %s: %s', (self.getName(), traceback.format_exc()))

        return False

    def loginSuccess(self, output):
        return True

    def loginDownload(self, url = '', nzb_id = ''):
        try:
            if not self.login_opener and not self.login():
                log.error('Failed downloading from %s', self.getName())

            return self.urlopen(url, opener = self.login_opener)
        except:
            log.error('Failed downloading from %s: %s', (self.getName(), traceback.format_exc()))

    def getLoginParams(self):
        return ''

    def download(self, url = '', nzb_id = ''):
        try:
            return self.urlopen(url, headers = {'User-Agent': Env.getIdentifier()}, show_error = False)
        except:
            log.error('Failed getting nzb from %s: %s', (self.getName(), traceback.format_exc()))

        return 'try_next'

    def feed(self):
        return []

    def search(self, movie, quality):

        if self.isDisabled():
            return []

        # Login if needed
        if self.urls.get('login') and (not self.login_opener and not self.login()):
            log.error('Failed to login to: %s', self.getName())
            return []

        # Create result container
        imdb_results = hasattr(self, '_search')
        results = ResultList(self, movie, quality, imdb_results = imdb_results)

        # Do search based on imdb id
        if imdb_results:
            self._search(movie, quality, results)
        # Search possible titles
        else:
            for title in possibleTitles(getTitle(movie['library'])):
                self._searchOnTitle(title, movie, quality, results)

        return results

    def belongsTo(self, url, provider = None, host = None):
        try:
@@ -110,5 +198,65 @@ class YarrProvider(Provider):
        return [self.cat_backup_id]


class ResultList(list):

    result_ids = None
    provider = None
    movie = None
    quality = None

    def __init__(self, provider, movie, quality, **kwargs):

        self.result_ids = []
        self.provider = provider
        self.movie = movie
        self.quality = quality
        self.kwargs = kwargs

        super(ResultList, self).__init__()

    def extend(self, results):
        for r in results:
            self.append(r)

    def append(self, result):

        new_result = self.fillResult(result)

        is_correct_movie = fireEvent('searcher.correct_movie',
            nzb = new_result, movie = self.movie, quality = self.quality,
            imdb_results = self.kwargs.get('imdb_results', False), single = True)

        if is_correct_movie and new_result['id'] not in self.result_ids:
            new_result['score'] += fireEvent('score.calculate', new_result, self.movie, single = True)
            self.found(new_result)
            self.result_ids.append(result['id'])

            super(ResultList, self).append(new_result)

    def fillResult(self, result):

        defaults = {
            'id': 0,
            'type': self.provider.type,
            'provider': self.provider.getName(),
            'download': self.provider.download,
            'url': '',
            'name': '',
            'age': 0,
            'size': 0,
            'description': '',
            'score': 0
        }

        return mergeDicts(defaults, result)

    def found(self, new_result):
        if not new_result.get('provider_extra'):
            new_result['provider_extra'] = ''
        else:
            new_result['provider_extra'] = ', %s' % new_result['provider_extra']

        log.info('Found: score(%(score)s) on %(provider)s%(provider_extra)s: %(name)s', new_result)
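The `imdb_results` dispatch in `YarrProvider.search` (query by IMDB id when a provider defines `_search`, otherwise iterate candidate titles) can be sketched standalone. The classes below are simplified stand-ins, not the real `Provider`/`ResultList`:

```python
class BaseProvider(object):
    def search(self, movie, quality, results):
        # Providers defining _search are queried by IMDB id directly;
        # the rest fall back to searching every candidate title.
        if hasattr(self, '_search'):
            self._search(movie, quality, results)
        else:
            for title in movie['titles']:
                self._searchOnTitle(title, movie, quality, results)
        return results

class ImdbCapable(BaseProvider):
    def _search(self, movie, quality, results):
        results.append(('imdb', movie['imdb']))

class TitleOnly(BaseProvider):
    def _searchOnTitle(self, title, movie, quality, results):
        results.append(('title', title))

movie = {'imdb': 'tt0088763', 'titles': ['Back to the Future', 'BTTF']}
print(ImdbCapable().search(movie, '720p', []))  # [('imdb', 'tt0088763')]
print(TitleOnly().search(movie, '720p', []))    # [('title', 'Back to the Future'), ('title', 'BTTF')]
```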


@@ -31,6 +31,14 @@ config = [{
            'advanced': True,
            'description': '<strong>%s</strong> is the rootname of the movie. For example "/path/to/movie cd1.mkv" will be "/path/to/movie"'
        },
        {
            'name': 'meta_url_only',
            'label': 'Only IMDB URL',
            'default': False,
            'advanced': True,
            'description': 'Create an NFO file containing only the IMDB url',
            'type': 'bool',
        },
        {
            'name': 'meta_fanart',
            'label': 'Fanart',


@@ -28,6 +28,11 @@ class XBMC(MetaDataBase):
        return os.path.join(root, basename.replace('%s', name))

    def getNfo(self, movie_info = {}, data = {}):

        # return imdb url only
        if self.conf('meta_url_only'):
            return 'http://www.imdb.com/title/%s/' % toUnicode(data['library']['identifier'])

        nfoxml = Element('movie')

        # Title

@@ -1,6 +0,0 @@
from .main import IMDBAPI

def start():
    return IMDBAPI()

config = []


@@ -0,0 +1,6 @@
from .main import OMDBAPI

def start():
    return OMDBAPI()

config = []


@@ -10,11 +10,11 @@ import traceback
log = CPLog(__name__)
class OMDBAPI(MovieProvider):

    urls = {
        'search': 'http://www.omdbapi.com/?%s',
        'info': 'http://www.omdbapi.com/?i=%s',
    }

    http_time_between_calls = 0
@@ -32,7 +32,7 @@ class IMDBAPI(MovieProvider):
                'name': q
            }

        cache_key = 'omdbapi.cache.%s' % q
        cached = self.getCache(cache_key, self.urls['search'] % tryUrlencode({'t': name_year.get('name'), 'y': name_year.get('year', '')}), timeout = 3)

        if cached:
@@ -50,7 +50,7 @@ class IMDBAPI(MovieProvider):
        if not identifier:
            return {}

        cache_key = 'omdbapi.cache.%s' % identifier
        cached = self.getCache(cache_key, self.urls['info'] % identifier, timeout = 3)

        if cached:

@@ -3,6 +3,7 @@ from couchpotato.core.helpers.encoding import simplifyString, toUnicode
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.movie.base import MovieProvider
from libs.themoviedb import tmdb
import traceback
log = CPLog(__name__)
@@ -61,7 +62,12 @@ class TheMovieDb(MovieProvider):
        if not results:
            log.debug('Searching for movie: %s', q)

            raw = None
            try:
                raw = tmdb.search(search_string)
            except:
                log.error('Failed searching TMDB for "%s": %s', (search_string, traceback.format_exc()))

            results = []
            if raw:


@@ -1,21 +1,20 @@
from .main import BinSearch

def start():
    return BinSearch()

config = [{
    'name': 'binsearch',
    'groups': [
        {
            'tab': 'searcher',
            'subtab': 'nzb_providers',
            'name': 'binsearch',
            'description': 'Free provider, less accurate. See <a href="https://www.binsearch.info/">BinSearch</a>',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False,
                },
            ],
        },


@@ -0,0 +1,99 @@
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
import re
import traceback

log = CPLog(__name__)


class BinSearch(NZBProvider):

    urls = {
        'download': 'https://www.binsearch.info/fcgi/nzb.fcgi?q=%s',
        'detail': 'https://www.binsearch.info%s',
        'search': 'https://www.binsearch.info/index.php?%s',
    }

    http_time_between_calls = 4 # Seconds

    def _search(self, movie, quality, results):

        q = '%s %s' % (movie['library']['identifier'], quality.get('identifier'))
        arguments = tryUrlencode({
            'q': q,
            'm': 'n',
            'max': 250,
            'adv_age': Env.setting('retention', 'nzb'),
            'adv_sort': 'date',
            'adv_col': 'on',
            'adv_nfo': 'on',
            'minsize': quality.get('size_min'),
            'maxsize': quality.get('size_max'),
        })
        data = self.getHTMLData(self.urls['search'] % arguments)

        if data:
            try:
                html = BeautifulSoup(data)
                main_table = html.find('table', attrs = {'id': 'r2'})

                if not main_table:
                    return

                items = main_table.find_all('tr')

                for row in items:
                    title = row.find('span', attrs = {'class': 's'})

                    if not title: continue

                    nzb_id = row.find('input', attrs = {'type': 'checkbox'})['name']
                    info = row.find('span', attrs = {'class': 'd'})
                    size_match = re.search('size:.(?P<size>[0-9\.]+.[GMB]+)', info.text)

                    def extra_check(item):
                        parts = re.search('available:.(?P<parts>\d+)./.(?P<total>\d+)', info.text)
                        total = tryInt(parts.group('total'))
                        parts = tryInt(parts.group('parts'))

                        if (total / parts) < 0.95 or ((total / parts) >= 0.95 and not 'par2' in info.text.lower()):
                            log.info2('Wrong: \'%s\', not complete: %s out of %s', (item['name'], parts, total))
                            return False

                        if 'requires password' in info.text.lower():
                            log.info2('Wrong: \'%s\', passworded', (item['name']))
                            return False

                        return True

                    results.append({
                        'id': nzb_id,
                        'name': title.text,
                        'age': tryInt(re.search('(?P<size>\d+d)', row.find_all('td')[-1:][0].text).group('size')[:-1]),
                        'size': self.parseSize(size_match.group('size')),
                        'url': self.urls['download'] % nzb_id,
                        'detail_url': self.urls['detail'] % info.find('a')['href'],
                        'extra_check': extra_check
                    })
            except:
                log.error('Failed to parse HTML response from BinSearch: %s', traceback.format_exc())

    def download(self, url = '', nzb_id = ''):

        params = {'action': 'nzb'}
        params[nzb_id] = 'on'

        try:
            return self.urlopen(url, params = params, show_error = False)
        except:
            log.error('Failed getting nzb from %s: %s', (self.getName(), traceback.format_exc()))

        return 'try_next'
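The two regexes BinSearch relies on above can be checked against sample strings. A sketch, assuming the sample text matches the shape of the site's HTML cells:

```python
import re

# 'size:' followed by a number and a GB/MB/KB-style unit.
info_text = 'size: 4.36 GB, parts available: 197 / 201'
size = re.search(r'size:.(?P<size>[0-9\.]+.[GMB]+)', info_text).group('size')
print(size)  # 4.36 GB

# Age is the digits before a trailing 'd' in the last table cell.
age_cell = '  2d  '
age = int(re.search(r'(?P<age>\d+d)', age_cell).group('age')[:-1])
print(age)  # 2
```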


@@ -1,17 +1,16 @@
from .main import FTDWorld

def start():
    return FTDWorld()

config = [{
    'name': 'ftdworld',
    'groups': [
        {
            'tab': 'searcher',
            'subtab': 'nzb_providers',
            'name': 'FTDWorld',
            'description': 'Free provider, less accurate. See <a href="http://ftdworld.net">FTDWorld</a>',
            'options': [
                {
                    'name': 'enabled',


@@ -0,0 +1,76 @@
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import traceback

log = CPLog(__name__)


class FTDWorld(NZBProvider):

    urls = {
        'search': 'http://ftdworld.net/api/index.php?%s',
        'detail': 'http://ftdworld.net/spotinfo.php?id=%s',
        'download': 'http://ftdworld.net/cgi-bin/nzbdown.pl?fileID=%s',
        'login': 'http://ftdworld.net/index.php',
    }

    http_time_between_calls = 3 #seconds

    cat_ids = [
        ([4, 11], ['dvdr']),
        ([1], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr', 'brrip']),
        ([7, 10, 13, 14], ['bd50', '720p', '1080p']),
    ]
    cat_backup_id = 1

    def _searchOnTitle(self, title, movie, quality, results):

        q = '"%s" %s' % (title, movie['library']['year'])

        params = tryUrlencode({
            'ctitle': q,
            'customQuery': 'usr',
            'cage': Env.setting('retention', 'nzb'),
            'csizemin': quality.get('size_min'),
            'csizemax': quality.get('size_max'),
            'ccategory': 14,
            'ctype': ','.join([str(x) for x in self.getCatId(quality['identifier'])]),
        })

        data = self.getJsonData(self.urls['search'] % params, opener = self.login_opener)

        if data:
            try:
                if data.get('numRes') == 0:
                    return

                for item in data.get('data'):
                    nzb_id = tryInt(item.get('id'))
                    results.append({
                        'id': nzb_id,
                        'name': toUnicode(item.get('Title')),
                        'age': self.calculateAge(tryInt(item.get('Created'))),
                        'url': self.urls['download'] % nzb_id,
                        'download': self.loginDownload,
                        'detail_url': self.urls['detail'] % nzb_id,
                        'score': (tryInt(item.get('webPlus', 0)) - tryInt(item.get('webMin', 0))) * 3,
                    })
            except:
                log.error('Failed to parse HTML response from FTDWorld: %s', traceback.format_exc())

    def getLoginParams(self):
        return tryUrlencode({
            'userlogin': self.conf('username'),
            'passlogin': self.conf('password'),
            'submit': 'Log In',
        })

    def loginSuccess(self, output):
        return 'password is incorrect' not in output


@@ -1,102 +0,0 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode, \
    simplifyString
from couchpotato.core.helpers.variable import tryInt, getTitle
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env

log = CPLog(__name__)


class Mysterbin(NZBProvider):

    urls = {
        'search': 'https://www.mysterbin.com/advsearch?%s',
        'download': 'https://www.mysterbin.com/nzb?c=%s',
        'nfo': 'https://www.mysterbin.com/nfo?c=%s',
    }

    http_time_between_calls = 1 #seconds

    def search(self, movie, quality):

        results = []
        if self.isDisabled():
            return results

        q = '"%s" %s %s' % (simplifyString(getTitle(movie['library'])), movie['library']['year'], quality.get('identifier'))
        for ignored in Env.setting('ignored_words', 'searcher').split(','):
            if len(q) + len(ignored.strip()) > 126:
                break
            q = '%s -%s' % (q, ignored.strip())

        params = {
            'q': q,
            'match': 'normal',
            'minSize': quality.get('size_min'),
            'maxSize': quality.get('size_max'),
            'complete': 2,
            'maxAge': Env.setting('retention', 'nzb'),
            'nopasswd': 'on',
        }

        cache_key = 'mysterbin.%s.%s.%s' % (movie['library']['identifier'], quality.get('identifier'), q)
        data = self.getCache(cache_key, self.urls['search'] % tryUrlencode(params))

        if data:
            try:
                html = BeautifulSoup(data)
                resultable = html.find('table', attrs = {'class': 't'})
                for result in resultable.find_all('tr'):
                    try:
                        myster_id = result.find('input', attrs = {'class': 'check4nzb'})['value']

                        # Age
                        age = ''
                        for temp in result.find('td', attrs = {'class': 'cdetail'}).find_all(text = True):
                            if 'days' in temp:
                                age = tryInt(temp.split(' ')[0])
                                break

                        # Size
                        size = None
                        for temp in result.find('div', attrs = {'class': 'cdetail'}).find_all(text = True):
                            if 'gb' in temp.lower() or 'mb' in temp.lower() or 'kb' in temp.lower():
                                size = self.parseSize(temp)
                                break

                        description = ''
                        if result.find('a', text = 'View NFO'):
                            description = toUnicode(self.getCache('mysterbin.%s' % myster_id, self.urls['nfo'] % myster_id, cache_timeout = 25920000))

                        new = {
                            'id': myster_id,
                            'name': ''.join(result.find('span', attrs = {'class': 'cname'}).find_all(text = True)),
                            'type': 'nzb',
                            'provider': self.getName(),
                            'age': age,
                            'size': size,
                            'url': self.urls['download'] % myster_id,
                            'description': description,
                            'download': self.download,
                            'check_nzb': False,
                        }

                        new['score'] = fireEvent('score.calculate', new, movie, single = True)
                        is_correct_movie = fireEvent('searcher.correct_movie',
                            nzb = new, movie = movie, quality = quality,
                            imdb_results = False, single = True)

                        if is_correct_movie:
                            results.append(new)
                            self.found(new)
                    except:
                        pass

                return results
            except AttributeError:
                log.debug('No search results found.')

        return results


@@ -1,156 +0,0 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from dateutil.parser import parse
import base64
import time
import xml.etree.ElementTree as XMLTree

log = CPLog(__name__)


class Newzbin(NZBProvider, RSS):

    urls = {
        'download': 'https://www.newzbin2.es/api/dnzb/',
        'search': 'https://www.newzbin2.es/search/',
    }

    format_ids = {
        2: ['scr'],
        1: ['cam'],
        4: ['tc'],
        8: ['ts'],
        1024: ['r5'],
    }
    cat_ids = [
        ([262144], ['bd50']),
        ([2097152], ['1080p']),
        ([524288], ['720p']),
        ([262144], ['brrip']),
        ([2], ['dvdr']),
    ]
    cat_backup_id = -1

    http_time_between_calls = 3 # Seconds

    def search(self, movie, quality):

        results = []
        if self.isDisabled():
            return results

        format_id = self.getFormatId(type)
        cat_id = self.getCatId(type)

        arguments = tryUrlencode({
            'searchaction': 'Search',
            'u_url_posts_only': '0',
            'u_show_passworded': '0',
            'q_url': 'imdb.com/title/' + movie['library']['identifier'],
            'sort': 'ps_totalsize',
            'order': 'asc',
            'u_post_results_amt': '100',
            'feed': 'rss',
            'category': '6',
            'ps_rb_video_format': str(cat_id),
            'ps_rb_source': str(format_id),
            'u_post_larger_than': quality.get('size_min'),
            'u_post_smaller_than': quality.get('size_max'),
        })

        url = "%s?%s" % (self.urls['search'], arguments)
        cache_key = str('newzbin.%s.%s.%s' % (movie['library']['identifier'], str(format_id), str(cat_id)))

        data = self.getCache(cache_key)
        if not data:
            headers = {
                'Authorization': "Basic %s" % base64.encodestring('%s:%s' % (self.conf('username'), self.conf('password')))[:-1]
            }

            try:
                data = self.urlopen(url, headers = headers)
                self.setCache(cache_key, data)
            except:
                return results

        if data:
            try:
                try:
                    data = XMLTree.fromstring(data)
                    nzbs = self.getElements(data, 'channel/item')
                except Exception, e:
                    log.debug('%s, %s', (self.getName(), e))
                    return results

                for nzb in nzbs:

                    title = self.getTextElement(nzb, "title")
                    if 'error' in title.lower(): continue

                    REPORT_NS = 'http://www.newzbin2.es/DTD/2007/feeds/report/'

                    # Add attributes to name
                    try:
                        for attr in nzb.find('{%s}attributes' % REPORT_NS):
                            title += ' ' + attr.text
                    except:
                        pass

                    id = int(self.getTextElement(nzb, '{%s}id' % REPORT_NS))
                    size = str(int(self.getTextElement(nzb, '{%s}size' % REPORT_NS)) / 1024 / 1024) + ' mb'
                    date = str(self.getTextElement(nzb, '{%s}postdate' % REPORT_NS))

                    new = {
                        'id': id,
                        'type': 'nzb',
                        'provider': self.getName(),
                        'name': title,
                        'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
                        'size': self.parseSize(size),
                        'url': str(self.getTextElement(nzb, '{%s}nzb' % REPORT_NS)),
                        'download': self.download,
                        'detail_url': str(self.getTextElement(nzb, 'link')),
                        'description': self.getTextElement(nzb, "description"),
                        'check_nzb': False,
                    }

                    is_correct_movie = fireEvent('searcher.correct_movie',
                        nzb = new, movie = movie, quality = quality,
                        imdb_results = True, single = True)

                    if is_correct_movie:
                        new['score'] = fireEvent('score.calculate', new, movie, single = True)
                        results.append(new)
                        self.found(new)

                return results
            except SyntaxError:
                log.error('Failed to parse XML response from newzbin')

        return results

    def download(self, url = '', nzb_id = ''):
        try:
            log.info('Download nzb from newzbin, report id: %s ', nzb_id)

            return self.urlopen(self.urls['download'], params = {
                'username': self.conf('username'),
                'password': self.conf('password'),
                'reportid': nzb_id
            }, show_error = False)
        except Exception, e:
            log.error('Failed downloading from newzbin, check credit: %s', e)

        return False

    def getFormatId(self, format):
        for id, quality in self.format_ids.iteritems():
            for q in quality:
                if q == format:
                    return id

        return self.cat_backup_id

    def isEnabled(self):
        return NZBProvider.isEnabled(self) and self.conf('enabled') and self.conf('username') and self.conf('password')


@@ -11,7 +11,9 @@ config = [{
            'subtab': 'nzb_providers',
            'name': 'newznab',
            'order': 10,
            'description': 'Enable <a href="http://newznab.com/" target="_blank">NewzNab providers</a> such as <a href="https://nzb.su" target="_blank">NZB.su</a>, \
                <a href="https://nzbs.org" target="_blank">NZBs.org</a>, <a href="http://dognzb.cr/" target="_blank">DOGnzb.cr</a>, \
                <a href="https://github.com/spotweb/spotweb" target="_blank">Spotweb</a> or <a href="https://nzbgeek.info/" target="_blank">NZBGeek</a>',
            'wizard': True,
            'options': [
                {
@@ -20,16 +22,16 @@ config = [{
                },
                {
                    'name': 'use',
                    'default': '0,0,0,0'
                },
                {
                    'name': 'host',
                    'default': 'nzb.su,dognzb.cr,nzbs.org,https://index.nzbgeek.info',
                    'description': 'The hostname of your newznab provider',
                },
                {
                    'name': 'api_key',
                    'default': ',,,',
                    'label': 'Api Key',
                    'description': 'Can be found on your profile page',
                    'type': 'combined',
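These combined comma-separated defaults line up positionally, one entry per provider. A sketch of that pairing, assuming a simplified `split_string` in place of the real `splitString` helper from `couchpotato.core.helpers.variable`:

```python
def split_string(s):
    # Simplified stand-in for couchpotato's splitString helper.
    return [x.strip() for x in s.split(',')]

use = split_string('0,1,0,1')
hosts = split_string('nzb.su,dognzb.cr,nzbs.org,https://index.nzbgeek.info')
api_keys = split_string(',,,')  # one (possibly empty) key per host

# Position i of each list describes the same provider.
enabled = [h for u, h in zip(use, hosts) if u == '1']
print(enabled)  # ['dognzb.cr', 'https://index.nzbgeek.info']
```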


@@ -1,8 +1,8 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import cleanHost, splitString
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.base import ResultList
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
@@ -10,7 +10,6 @@ from urllib2 import HTTPError
from urlparse import urlparse
import time
import traceback

log = CPLog(__name__)
@@ -25,139 +24,59 @@ class Newznab(NZBProvider, RSS):
limits_reached = {}
cat_ids = [
([2010], ['dvdr']),
([2030], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr']),
([2040], ['720p', '1080p']),
([2050], ['bd50']),
]
cat_backup_id = 2000
http_time_between_calls = 1 # Seconds
def feed(self):
hosts = self.getHosts()
results = []
for host in hosts:
result = self.singleFeed(host)
if result:
results.extend(result)
return results
def singleFeed(self, host):
results = []
if self.isDisabled(host):
return results
arguments = tryUrlencode({
't': self.cat_backup_id,
'r': host['api_key'],
'i': 58,
})
url = "%s?%s" % (cleanHost(host['host']) + 'rss', arguments)
cache_key = 'newznab.%s.feed.%s' % (host['host'], arguments)
results = self.createItems(url, cache_key, host, for_feed = True)
return results
def search(self, movie, quality):
hosts = self.getHosts()
results = []
for host in hosts:
result = self.singleSearch(host, movie, quality)
results = ResultList(self, movie, quality, imdb_results = True)
if result:
results.extend(result)
for host in hosts:
if self.isDisabled(host):
continue
self._searchOnHost(host, movie, quality, results)
return results
def singleSearch(self, host, movie, quality):
def _searchOnHost(self, host, movie, quality, results):
results = []
if self.isDisabled(host):
return results
cat_id = self.getCatId(quality['identifier'])
arguments = tryUrlencode({
'imdbid': movie['library']['identifier'].replace('tt', ''),
'cat': cat_id[0],
'apikey': host['api_key'],
'extended': 1
})
url = "%s&%s" % (self.getUrl(host['host'], self.urls['search']), arguments)
url = '%s&%s' % (self.getUrl(host['host'], self.urls['search']), arguments)
cache_key = 'newznab.%s.%s.%s' % (host['host'], movie['library']['identifier'], cat_id[0])
nzbs = self.getRSSData(url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
results = self.createItems(url, cache_key, host, movie = movie, quality = quality)
return results
def createItems(self, url, cache_key, host, movie = None, quality = None, for_feed = False):
results = []
data = self.getCache(cache_key, url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
if data:
try:
try:
data = XMLTree.fromstring(data)
nzbs = self.getElements(data, 'channel/item')
except Exception, e:
log.debug('%s, %s', (self.getName(), e))
return results
for nzb in nzbs:
date = None
for item in nzb:
if item.attrib.get('name') == 'usenetdate':
date = item.attrib.get('value')
break
if not date:
date = self.getTextElement(nzb, 'pubDate')
nzb_id = self.getTextElement(nzb, 'guid').split('/')[-1:].pop()
name = self.getTextElement(nzb, 'title')
if not name:
continue
results.append({
'id': nzb_id,
'provider_extra': host['host'],
'name': self.getTextElement(nzb, 'title'),
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': int(self.getElement(nzb, 'enclosure').attrib['length']) / 1024 / 1024,
'url': (self.getUrl(host['host'], self.urls['download']) % tryUrlencode(nzb_id)) + self.getApiExt(host),
'detail_url': '%sdetails/%s' % (cleanHost(host['host']), tryUrlencode(nzb_id)),
'content': self.getTextElement(nzb, 'description'),
})
except SyntaxError:
log.error('Failed to parse XML response from Newznab: %s', host)
return results
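The id extraction above takes the last path segment of the guid via `split('/')[-1:].pop()`. A standalone Python 3 sketch of that trick (the guid value is a made-up example, not from the diff):

```python
guid = 'https://indexer.example/details/123abc'  # hypothetical guid value
# [-1:] keeps the last segment as a one-element list; pop() unwraps it
nzb_id = guid.split('/')[-1:].pop()
print(nzb_id)  # -> 123abc
```

The same result could be had with `guid.split('/')[-1]`; the list-slice-then-pop form just mirrors the provider's code.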
def getHosts(self):
@@ -217,5 +136,6 @@ class Newznab(NZBProvider, RSS):
self.limits_reached[host] = time.time()
return 'try_next'
log.error('Failed download from %s: %s', (host, traceback.format_exc()))
return 'try_next'


@@ -15,7 +15,6 @@ config = [{
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
],
},


@@ -1,15 +1,11 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import time
log = CPLog(__name__)
@@ -17,87 +13,53 @@ log = CPLog(__name__)
class NZBClub(NZBProvider, RSS):
urls = {
'search': 'http://www.nzbclub.com/nzbfeed.aspx?%s',
}
http_time_between_calls = 4 #seconds
def _searchOnTitle(self, title, movie, quality, results):
q = '"%s %s" %s' % (title, movie['library']['year'], quality.get('identifier'))
for ignored in Env.setting('ignored_words', 'searcher').split(','):
q = '%s -%s' % (q, ignored.strip())
params = tryUrlencode({
'q': q,
'ig': '1',
'rpp': 200,
'st': 1,
'sp': 1,
'ns': 1,
})
nzbs = self.getRSSData(self.urls['search'] % params)
for nzb in nzbs:
nzbclub_id = tryInt(self.getTextElement(nzb, "link").split('/nzb_view/')[1].split('/')[0])
enclosure = self.getElement(nzb, "enclosure").attrib
size = enclosure['length']
date = self.getTextElement(nzb, "pubDate")
def extra_check(item):
full_description = self.getCache('nzbclub.%s' % nzbclub_id, item['detail_url'], cache_timeout = 25920000)
for ignored in ['ARCHIVE inside ARCHIVE', 'Incomplete', 'repair impossible']:
if ignored in full_description:
log.info('Wrong: Seems to be passworded or corrupted files: %s', item['name'])
return False
return True
results.append({
'id': nzbclub_id,
'name': toUnicode(self.getTextElement(nzb, "title")),
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': tryInt(size) / 1024 / 1024,
'url': enclosure['url'].replace(' ', '_'),
'detail_url': self.getTextElement(nzb, "link"),
'get_more_info': self.getMoreInfo,
'extra_check': extra_check
})
def getMoreInfo(self, item):
full_description = self.getCache('nzbclub.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)


@@ -15,6 +15,7 @@ config = [{
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
],
},


@@ -1,17 +1,13 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import re
import time
import traceback
log = CPLog(__name__)
@@ -20,18 +16,14 @@ class NzbIndex(NZBProvider, RSS):
urls = {
'search': 'https://www.nzbindex.com/rss/?%s',
}
http_time_between_calls = 1 # Seconds
def _searchOnTitle(self, title, movie, quality, results):
q = '"%s" %s %s' % (title, movie['library']['year'], quality.get('identifier'))
arguments = tryUrlencode({
'q': q,
'age': Env.setting('retention', 'nzb'),
@@ -43,68 +35,37 @@ class NzbIndex(NZBProvider, RSS):
'more': 1,
'complete': 1,
})
nzbs = self.getRSSData(self.urls['search'] % arguments)
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
nzbindex_id = int(self.getTextElement(nzb, "link").split('/')[4])
try:
description = self.getTextElement(nzb, "description")
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
description = ''
def extra_check(item):
if '#c20000' in item['description'].lower():
log.info('Wrong: Seems to be passworded: %s', item['name'])
return False
return True
results.append({
'id': nzbindex_id,
'name': self.getTextElement(nzb, "title"),
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, "pubDate")).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': enclosure['url'].replace('/download/', '/release/'),
'description': description,
'get_more_info': self.getMoreInfo,
'extra_check': extra_check,
})
def getMoreInfo(self, item):
try:
@@ -116,5 +77,3 @@ class NzbIndex(NZBProvider, RSS):
except:
pass


@@ -1,39 +0,0 @@
from .main import NZBMatrix
def start():
return NZBMatrix()
config = [{
'name': 'nzbmatrix',
'groups': [
{
'tab': 'searcher',
'subtab': 'nzb_providers',
'name': 'nzbmatrix',
'label': 'NZBMatrix',
'description': 'See <a href="https://nzbmatrix.com/">NZBMatrix</a>',
'wizard': True,
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
{
'name': 'username',
},
{
'name': 'api_key',
'default': '',
'label': 'Api Key',
},
{
'name': 'english_only',
'default': 1,
'type': 'bool',
'label': 'English only',
'description': 'Only search for English spoken movies on NZBMatrix',
},
],
},
],
}]


@@ -1,108 +0,0 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
import time
import xml.etree.ElementTree as XMLTree
log = CPLog(__name__)
class NZBMatrix(NZBProvider, RSS):
urls = {
'download': 'https://api.nzbmatrix.com/v1.1/download.php?id=%s',
'detail': 'https://nzbmatrix.com/nzb-details.php?id=%s&hit=1',
'search': 'https://rss.nzbmatrix.com/rss.php',
}
cat_ids = [
([50], ['bd50']),
([42, 53], ['720p', '1080p']),
([2, 9], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr']),
([54], ['brrip']),
([1], ['dvdr']),
]
cat_backup_id = 2
def search(self, movie, quality):
results = []
if self.isDisabled():
return results
cat_ids = ','.join(['%s' % x for x in self.getCatId(quality.get('identifier'))])
arguments = tryUrlencode({
'term': movie['library']['identifier'],
'subcat': cat_ids,
'username': self.conf('username'),
'apikey': self.conf('api_key'),
'searchin': 'weblink',
'maxage': Env.setting('retention', section = 'nzb'),
'english': self.conf('english_only'),
})
url = "%s?%s" % (self.urls['search'], arguments)
cache_key = 'nzbmatrix.%s.%s' % (movie['library'].get('identifier'), cat_ids)
data = self.getCache(cache_key, url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
if data:
try:
try:
data = XMLTree.fromstring(data)
nzbs = self.getElements(data, 'channel/item')
except Exception, e:
log.debug('%s, %s', (self.getName(), e))
return results
for nzb in nzbs:
title = self.getTextElement(nzb, "title")
if 'error' in title.lower(): continue
id = int(self.getTextElement(nzb, "link").split('&')[0].partition('id=')[2])
size = self.getTextElement(nzb, "description").split('<br /><b>')[2].split('> ')[1]
date = str(self.getTextElement(nzb, "description").split('<br /><b>')[3].partition('Added:</b> ')[2])
new = {
'id': id,
'type': 'nzb',
'provider': self.getName(),
'name': title,
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': self.parseSize(size),
'url': self.urls['download'] % id + self.getApiExt(),
'download': self.download,
'detail_url': self.urls['detail'] % id,
'description': self.getTextElement(nzb, "description"),
'check_nzb': True,
}
is_correct_movie = fireEvent('searcher.correct_movie',
nzb = new, movie = movie, quality = quality,
imdb_results = True, single = True)
if is_correct_movie:
new['score'] = fireEvent('score.calculate', new, movie, single = True)
results.append(new)
self.found(new)
return results
except SyntaxError:
log.error('Failed to parse XML response from NZBMatrix.com')
return results
def download(self, url = '', nzb_id = ''):
return self.urlopen(url, headers = {'User-Agent': Env.getIdentifier()})
def getApiExt(self):
return '&username=%s&apikey=%s' % (self.conf('username'), self.conf('api_key'))
def isEnabled(self):
return NZBProvider.isEnabled(self) and self.conf('username') and self.conf('api_key')


@@ -1,11 +1,9 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
import time
log = CPLog(__name__)
@@ -23,15 +21,9 @@ class Nzbsrus(NZBProvider, RSS):
]
cat_backup_id = 240
def _search(self, movie, quality, results):
cat_id_string = '&'.join(['c%s=1' % x for x in self.getCatId(quality.get('identifier'))])
arguments = tryUrlencode({
'searchtext': 'imdb:' + movie['library']['identifier'][2:],
'uid': self.conf('userid'),
@@ -42,63 +34,29 @@ class Nzbsrus(NZBProvider, RSS):
# check for english_only
if self.conf('english_only'):
arguments += '&lang0=1&lang3=1&lang1=1'
url = '%s&%s&%s' % (self.urls['search'], arguments, cat_id_string)
nzbs = self.getRSSData(url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
for nzb in nzbs:
title = self.getTextElement(nzb, 'name')
if 'error' in title.lower(): continue
nzb_id = self.getTextElement(nzb, 'id')
size = int(round(int(self.getTextElement(nzb, 'size')) / 1048576))
age = int(round((time.time() - int(self.getTextElement(nzb, 'postdate'))) / 86400))
results.append({
'id': nzb_id,
'name': title,
'age': age,
'size': size,
'url': self.urls['download'] % nzb_id + self.getApiExt() + self.getTextElement(nzb, 'key'),
'detail_url': self.urls['detail'] % nzb_id,
'description': self.getTextElement(nzb, 'addtext'),
})
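The age and size values appended above come from simple arithmetic: elapsed epoch seconds rounded to whole days, and bytes divided by 1048576 (1024 * 1024) to get MiB. A minimal Python 3 sketch of those two conversions, written as standalone helpers for illustration (the function names are not from the diff):

```python
import time

def age_in_days(postdate_epoch, now = None):
    # Same arithmetic as the provider above: elapsed seconds, rounded to whole days
    now = time.time() if now is None else now
    return int(round((now - postdate_epoch) / 86400))

def size_in_mb(size_bytes):
    # 1048576 = 1024 * 1024, the bytes-to-MiB conversion used above
    return int(round(int(size_bytes) / 1048576))
```

Note the project itself runs Python 2, where `/` on ints truncates; the `round` here makes the Python 3 behaviour explicit.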
def getApiExt(self):
return '/%s/' % (self.conf('userid'))


@@ -0,0 +1,23 @@
from .main import Nzbx
def start():
return Nzbx()
config = [{
'name': 'nzbx',
'groups': [
{
'tab': 'searcher',
'subtab': 'nzb_providers',
'name': 'nzbX',
'description': 'Free provider. See <a href="https://www.nzbx.co/">nzbX</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
],
},
],
}]


@@ -0,0 +1,38 @@
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
log = CPLog(__name__)
class Nzbx(NZBProvider):
urls = {
'search': 'https://nzbx.co/api/search?%s',
'details': 'https://nzbx.co/api/details?guid=%s',
}
http_time_between_calls = 1 # Seconds
def _search(self, movie, quality, results):
# Get nzbs
arguments = tryUrlencode({
'q': movie['library']['identifier'].replace('tt', ''),
'sf': quality.get('size_min'),
})
nzbs = self.getJsonData(self.urls['search'] % arguments, headers = {'User-Agent': Env.getIdentifier()})
for nzb in nzbs:
results.append({
'id': nzb['guid'],
'url': nzb['nzb'],
'detail_url': self.urls['details'] % nzb['guid'],
'name': nzb['name'],
'age': self.calculateAge(int(nzb['postdate'])),
'size': tryInt(nzb['size']) / 1024 / 1024,
'score': 5 if nzb['votes']['upvotes'] > nzb['votes']['downvotes'] else 0
})
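The score expression above gives a flat +5 boost when a release's upvotes outnumber its downvotes and nothing otherwise. A minimal Python 3 sketch of that rule (the dict shape mirrors the `votes` JSON field used above; the helper name is illustrative):

```python
def vote_score(votes):
    # Flat +5 bonus when the community leans positive, otherwise no bonus
    return 5 if votes['upvotes'] > votes['downvotes'] else 0
```

A tie yields 0, since the comparison is strictly greater-than.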


@@ -0,0 +1,31 @@
from .main import OMGWTFNZBs
def start():
return OMGWTFNZBs()
config = [{
'name': 'omgwtfnzbs',
'groups': [
{
'tab': 'searcher',
'subtab': 'nzb_providers',
'name': 'OMGWTFNZBs',
'description': 'See <a href="http://www.omgwtfnzbs.com/">OMGWTFNZBs</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
{
'name': 'username',
'default': '',
},
{
'name': 'api_key',
'label': 'Api Key',
'default': '',
},
],
},
],
}]


@@ -0,0 +1,61 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from dateutil.parser import parse
from urlparse import urlparse, parse_qs
import time
log = CPLog(__name__)
class OMGWTFNZBs(NZBProvider, RSS):
urls = {
'search': 'http://rss.omgwtfnzbs.org/rss-search.php?%s',
}
http_time_between_calls = 1 #seconds
cat_ids = [
([15], ['dvdrip']),
([15, 16], ['brrip']),
([16], ['720p', '1080p', 'bd50']),
([17], ['dvdr']),
]
cat_backup_id = 'movie'
def search(self, movie, quality):
if quality['identifier'] in fireEvent('quality.pre_releases', single = True):
return []
return super(OMGWTFNZBs, self).search(movie, quality)
def _searchOnTitle(self, title, movie, quality, results):
q = '%s %s' % (title, movie['library']['year'])
params = tryUrlencode({
'search': q,
'catid': ','.join([str(x) for x in self.getCatId(quality['identifier'])]),
'user': self.conf('username', default = ''),
'api': self.conf('api_key', default = ''),
})
nzbs = self.getRSSData(self.urls['search'] % params)
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
results.append({
'id': parse_qs(urlparse(self.getTextElement(nzb, 'link')).query).get('id')[0],
'name': toUnicode(self.getTextElement(nzb, 'title')),
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, 'pubDate')).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': self.getTextElement(nzb, 'link'),
'description': self.getTextElement(nzb, 'description')
})
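The id line above pulls the `id` query parameter out of the item's link with `urlparse` and `parse_qs`. A standalone Python 3 sketch (Python 3 moved these helpers from `urlparse` into `urllib.parse`; the URL is a made-up example):

```python
from urllib.parse import urlparse, parse_qs

link = 'http://omgwtfnzbs.example/details?id=abc123&hit=1'  # hypothetical item link
# parse_qs returns each parameter as a list of values, hence the [0]
nzb_id = parse_qs(urlparse(link).query).get('id')[0]
print(nzb_id)  # -> abc123
```

If `id` could be absent, `.get('id')` would return `None` and the `[0]` would raise; the provider code above shares that assumption.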


@@ -1,9 +1,6 @@
from couchpotato.core.helpers.variable import getImdb, md5
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.base import YarrProvider
import cookielib
import traceback
import urllib2
log = CPLog(__name__)
@@ -11,7 +8,6 @@ log = CPLog(__name__)
class TorrentProvider(YarrProvider):
type = 'torrent'
login_opener = None
def imdbMatch(self, url, imdbId):
if getImdb(url) == imdbId:
@@ -29,29 +25,8 @@ class TorrentProvider(YarrProvider):
return False
class TorrentMagnetProvider(TorrentProvider):

type = 'torrent_magnet'

download = None


@@ -16,7 +16,7 @@ config = [{
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
],
},


@@ -1,16 +1,14 @@
from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentMagnetProvider
import re
import traceback
log = CPLog(__name__)
class KickAssTorrents(TorrentMagnetProvider):
urls = {
'test': 'https://kat.ph/',
@@ -30,16 +28,10 @@ class KickAssTorrents(TorrentProvider):
http_time_between_calls = 1 #seconds
cat_backup_id = None
def _search(self, movie, quality, results):
data = self.getHTMLData(self.urls['search'] % ('m', movie['library']['identifier'].replace('tt', '')))
if data:
cat_ids = self.getCatId(quality['identifier'])
@@ -53,61 +45,42 @@ class KickAssTorrents(TorrentProvider):
continue
try:
for temp in result.find_all('tr'):
if temp['class'] is 'firstr' or not temp.get('id'):
continue
new = {}
nr = 0
for td in temp.find_all('td'):
column_name = table_order[nr]
if column_name:
if column_name is 'name':
link = td.find('div', {'class': 'torrentname'}).find_all('a')[1]
new['id'] = temp.get('id')[-8:]
new['name'] = link.text
new['url'] = td.find('a', 'imagnet')['href']
new['detail_url'] = self.urls['detail'] % link['href'][1:]
new['score'] = 20 if td.find('a', 'iverif') else 0
elif column_name is 'size':
new['size'] = self.parseSize(td.text)
elif column_name is 'age':
new['age'] = self.ageToDays(td.text)
elif column_name is 'seeds':
new['seeds'] = tryInt(td.text)
elif column_name is 'leechers':
new['leechers'] = tryInt(td.text)
nr += 1
results.append(new)
except:
log.error('Failed parsing KickAssTorrents: %s', traceback.format_exc())
except AttributeError:
log.debug('No search results found.')
def ageToDays(self, age_str):
age = 0
age_str = age_str.replace('&nbsp;', ' ')


@@ -31,6 +31,10 @@ config = [{
'name': 'password',
'default': '',
'type': 'password',
},
{
'name': 'passkey',
'default': '',
}
],
}


@@ -1,4 +1,3 @@
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import getTitle, tryInt, mergeDicts
from couchpotato.core.logger import CPLog
@@ -21,7 +20,7 @@ class PassThePopcorn(TorrentProvider):
'domain': 'https://tls.passthepopcorn.me',
'detail': 'https://tls.passthepopcorn.me/torrents.php?torrentid=%s',
'torrent': 'https://tls.passthepopcorn.me/torrents.php',
'login': 'https://tls.passthepopcorn.me/ajax.php?action=login',
'search': 'https://tls.passthepopcorn.me/search/%s/0/7/%d'
}
@@ -65,18 +64,11 @@ class PassThePopcorn(TorrentProvider):
else:
raise PassThePopcorn.NotLoggedInHTTPError(req.get_full_url(), code, msg, headers, fp)
def _search(self, movie, quality, results):
movie_title = getTitle(movie['library'])
quality_id = quality['identifier']
log.info('Searching for %s at quality %s' % (movie_title, quality_id))
params = mergeDicts(self.quality_search_params[quality_id].copy(), {
'order_by': 'relevance',
'order_way': 'descending',
@@ -85,7 +77,7 @@ class PassThePopcorn(TorrentProvider):
# Do login for the cookies
if not self.login_opener and not self.login():
return
try:
url = '%s?json=noredirect&%s' % (self.urls['torrent'], tryUrlencode(params))
@@ -93,12 +85,11 @@ class PassThePopcorn(TorrentProvider):
res = json.loads(txt)
except:
log.error('Search on PassThePopcorn.me (%s) failed (could not decode JSON)' % params)
return
try:
if not 'Movies' in res:
log.info("PTP search returned nothing for '%s' at quality '%s' with search parameters %s" % (movie_title, quality_id, params))
return
authkey = res['AuthKey']
passkey = res['PassKey']
@@ -118,7 +109,6 @@ class PassThePopcorn(TorrentProvider):
if 'Scene' in torrent and torrent['Scene']:
torrentdesc += ' Scene'
if 'RemasterTitle' in torrent and torrent['RemasterTitle']:
torrentdesc += self.htmlToASCII(' %s' % torrent['RemasterTitle'])
torrentdesc += ' (%s)' % quality_id
@@ -127,39 +117,23 @@ class PassThePopcorn(TorrentProvider):
def extra_check(item):
return self.torrentMeetsQualitySpec(item, type)
results.append({
'id': torrent_id,
'name': torrent_name,
'url': '%s?action=download&id=%d&authkey=%s&torrent_pass=%s' % (self.urls['torrent'], torrent_id, authkey, passkey),
'detail_url': self.urls['detail'] % torrent_id,
'date': tryInt(time.mktime(parse(torrent['UploadTime']).timetuple())),
'size': tryInt(torrent['Size']) / (1024 * 1024),
'provider': self.getName(),
'seeders': tryInt(torrent['Seeders']),
'leechers': tryInt(torrent['Leechers']),
'score': 50 if torrent['GoldenPopcorn'] else 0,
'extra_check': extra_check,
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def login(self):
cookieprocessor = urllib2.HTTPCookieProcessor(cookielib.CookieJar())
@@ -249,6 +223,7 @@ class PassThePopcorn(TorrentProvider):
return tryUrlencode({
'username': self.conf('username'),
'password': self.conf('password'),
'passkey': self.conf('passkey'),
'keeplogged': '1',
'login': 'Login'
})


@@ -10,12 +10,12 @@ config = [{
'tab': 'searcher',
'subtab': 'torrent_providers',
'name': 'PublicHD',
'description': 'Public Torrent site with only HD content. See <a href="https://publichd.se/">PublicHD</a>',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
],
},


@@ -1,9 +1,8 @@
from bs4 import BeautifulSoup
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import tryUrlencode, toUnicode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.torrent.base import TorrentMagnetProvider
from urlparse import parse_qs
import re
import traceback
@@ -11,31 +10,31 @@ import traceback
log = CPLog(__name__)
class PublicHD(TorrentMagnetProvider):
urls = {
'test': 'https://publichd.se',
'detail': 'https://publichd.se/index.php?page=torrent-details&id=%s',
'search': 'https://publichd.se/index.php',
}
http_time_between_calls = 0
def search(self, movie, quality):
if not quality.get('hd', False):
return []
return super(PublicHD, self).search(movie, quality)
def _searchOnTitle(self, title, movie, quality, results):
params = tryUrlencode({
'page':'torrents',
'search': '%s %s' % (title, movie['library']['year']),
'active': 1,
})
data = self.getHTMLData('%s?%s' % (self.urls['search'], params))
if data:
@@ -53,36 +52,20 @@ class PublicHD(TorrentProvider):
url = parse_qs(info_url['href'])
results.append({
'id': url['id'][0],
'name': info_url.string,
'url': download['href'],
'detail_url': self.urls['detail'] % url['id'][0],
'size': self.parseSize(result.find_all('td')[7].string),
'seeders': tryInt(result.find_all('td')[4].string),
'leechers': tryInt(result.find_all('td')[5].string),
'get_more_info': self.getMoreInfo
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
return []
def getMoreInfo(self, item):
full_description = self.getCache('publichd.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
html = BeautifulSoup(full_description)
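The `parse_qs(info_url['href'])` call in the hunk above pulls the torrent id out of the detail link's query string, and `url['id'][0]` takes the first value since `parse_qs` returns lists. A minimal stdlib sketch of that pattern (the href value is a made-up example, not a real tracker link):

```python
# Extract a torrent id from a detail-page href, as the PublicHD parser does.
# The href below is hypothetical; parse_qs and urlparse are stdlib.
from urllib.parse import parse_qs, urlparse

href = 'index.php?page=torrent-details&id=12345'
query = parse_qs(urlparse(href).query)   # {'page': ['torrent-details'], 'id': ['12345']}
torrent_id = query['id'][0]              # parse_qs values are lists, hence [0]
print(torrent_id)  # 12345
```

Calling `parse_qs` on the raw href, as the provider code does, also happens to work here: everything before the first `=` just becomes part of the first key, leaving the `id` key intact. Splitting the query off with `urlparse` first is the more defensive form.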


@@ -1,5 +1,4 @@
 from bs4 import BeautifulSoup
-from couchpotato.core.event import fireEvent
 from couchpotato.core.helpers.encoding import tryUrlencode, toUnicode
 from couchpotato.core.helpers.variable import tryInt
 from couchpotato.core.logger import CPLog
@@ -27,30 +26,24 @@ class SceneAccess(TorrentProvider):

     http_time_between_calls = 1 #seconds

-    def search(self, movie, quality):
-
-        results = []
-        if self.isDisabled():
-            return results
+    def _search(self, movie, quality, results):

         url = self.urls['search'] % (
             self.getCatId(quality['identifier'])[0],
             self.getCatId(quality['identifier'])[0]
         )

-        q = '%s %s' % (movie['library']['identifier'], quality.get('identifier'))
         arguments = tryUrlencode({
-            'search': q,
+            'search': movie['library']['identifier'],
             'method': 1,
         })
         url = "%s&%s" % (url, arguments)

         # Do login for the cookies
         if not self.login_opener and not self.login():
-            return results
+            return

-        cache_key = 'sceneaccess.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
-        data = self.getCache(cache_key, url, opener = self.login_opener)
+        data = self.getHTMLData(url, opener = self.login_opener)

         if data:
             html = BeautifulSoup(data)
@@ -58,7 +51,7 @@ class SceneAccess(TorrentProvider):
             try:
                 resultsTable = html.find('table', attrs = {'id' : 'torrents-table'})
                 if resultsTable is None:
-                    return results
+                    return

                 entries = resultsTable.find_all('tr', attrs = {'class' : 'tt_row'})
                 for result in entries:
@@ -66,38 +59,23 @@ class SceneAccess(TorrentProvider):
                     link = result.find('td', attrs = {'class' : 'ttr_name'}).find('a')
                     url = result.find('td', attrs = {'class' : 'td_dl'}).find('a')
                     leechers = result.find('td', attrs = {'class' : 'ttr_leechers'}).find('a')
-                    id = link['href'].replace('details?id=', '')
+                    torrent_id = link['href'].replace('details?id=', '')

-                    new = {
-                        'id': id,
-                        'type': 'torrent',
-                        'check_nzb': False,
-                        'description': '',
-                        'provider': self.getName(),
+                    results.append({
+                        'id': torrent_id,
                         'name': link['title'],
                         'url': self.urls['download'] % url['href'],
-                        'detail_url': self.urls['detail'] % id,
+                        'detail_url': self.urls['detail'] % torrent_id,
                         'size': self.parseSize(result.find('td', attrs = {'class' : 'ttr_size'}).contents[0]),
                         'seeders': tryInt(result.find('td', attrs = {'class' : 'ttr_seeders'}).find('a').string),
                         'leechers': tryInt(leechers.string) if leechers else 0,
                         'download': self.loginDownload,
                         'get_more_info': self.getMoreInfo,
-                    }
-
-                    new['score'] = fireEvent('score.calculate', new, movie, single = True)
-                    is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
-                                                 imdb_results = False, single = True)
-                    if is_correct_movie:
-                        results.append(new)
-                        self.found(new)
-
-            return results
+                    })

             except:
                 log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
-                return []

     def getLoginParams(self):
         return tryUrlencode({
             'username': self.conf('username'),
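The `tryUrlencode` helper used throughout these providers wraps standard query-string encoding, and `_search` appends the encoded arguments to the category URL with a plain `&`. A rough stdlib equivalent of that URL-building step (the base URL and search term are placeholders, not real tracker values):

```python
# Build a search URL the way _search does: category URL plus encoded arguments.
# 'base' is a placeholder; in the provider it comes from self.urls['search'].
from urllib.parse import urlencode

base = 'https://tracker.example/browse?c8=8'
arguments = urlencode({'search': 'tt1375666', 'method': 1})
url = '%s&%s' % (base, arguments)
print(url)  # https://tracker.example/browse?c8=8&search=tt1375666&method=1
```

`urlencode` also percent-escapes spaces and special characters, which is exactly why the head's "Urlencode spotweb id" fix routes ids through it instead of interpolating them raw.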


@@ -1,7 +1,6 @@
 from bs4 import BeautifulSoup
-from couchpotato.core.event import fireEvent
 from couchpotato.core.helpers.encoding import simplifyString, tryUrlencode
-from couchpotato.core.helpers.variable import getTitle, tryInt
+from couchpotato.core.helpers.variable import tryInt
 from couchpotato.core.logger import CPLog
 from couchpotato.core.providers.torrent.base import TorrentProvider
 import traceback
@@ -21,13 +20,9 @@ class SceneHD(TorrentProvider):

     http_time_between_calls = 1 #seconds

-    def search(self, movie, quality):
-
-        results = []
-        if self.isDisabled():
-            return results
+    def _searchOnTitle(self, title, movie, quality, results):

-        q = '"%s %s" %s' % (simplifyString(getTitle(movie['library'])), movie['library']['year'], quality.get('identifier'))
+        q = '"%s %s" %s' % (simplifyString(title), movie['library']['year'], quality.get('identifier'))
         arguments = tryUrlencode({
             'search': q,
         })
@@ -35,10 +30,9 @@ class SceneHD(TorrentProvider):

         # Cookie login
         if not self.login_opener and not self.login():
-            return results
+            return

-        cache_key = 'scenehd.%s.%s' % (movie['library']['identifier'], quality.get('identifier'))
-        data = self.getCache(cache_key, url, opener = self.login_opener)
+        data = self.getHTMLData(url, opener = self.login_opener)

         if data:
             html = BeautifulSoup(data)
@@ -52,7 +46,7 @@ class SceneHD(TorrentProvider):
                 detail_link = all_cells[2].find('a')
                 details = detail_link['href']
-                id = details.replace('details.php?id=', '')
+                torrent_id = details.replace('details.php?id=', '')

                 leechers = all_cells[11].find('a')
                 if leechers:
@@ -60,38 +54,20 @@ class SceneHD(TorrentProvider):
                 else:
                     leechers = all_cells[11].string

-                new = {
-                    'id': id,
-                    'type': 'torrent',
-                    'check_nzb': False,
-                    'description': '',
-                    'provider': self.getName(),
+                results.append({
+                    'id': torrent_id,
                     'name': detail_link['title'],
                     'size': self.parseSize(all_cells[7].string),
                     'seeders': tryInt(all_cells[10].find('a').string),
                     'leechers': tryInt(leechers),
-                    'url': self.urls['download'] % id,
+                    'url': self.urls['download'] % torrent_id,
                     'download': self.loginDownload,
-                }
-
-                imdb_link = all_cells[1].find('a')
-                imdb_results = self.imdbMatch(imdb_link['href'], movie['library']['identifier']) if imdb_link else False
-
-                new['score'] = fireEvent('score.calculate', new, movie, single = True)
-                is_correct_movie = fireEvent('searcher.correct_movie', nzb = new, movie = movie, quality = quality,
-                                             imdb_results = imdb_results, single = True)
-                if is_correct_movie:
-                    results.append(new)
-                    self.found(new)
-
-            return results
+                    'description': all_cells[1].find('a')['href'],
+                })

             except:
                 log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
-                return []

     def getLoginParams(self, params):
         return tryUrlencode({
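All three providers route scraped strings through `tryInt` before storing seeder and leecher counts, so a missing cell or non-numeric string degrades to 0 instead of raising. An approximation of that helper (the real one lives in `couchpotato.core.helpers.variable`; this sketch is an assumption about its behavior, not its exact implementation):

```python
def try_int(value, default=0):
    """Coerce a scraped value to int, falling back to a default on bad input."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return default

print(try_int('42'))   # 42
print(try_int(None))   # 0  (e.g. a missing <a> cell)
print(try_int('n/a'))  # 0  (non-numeric cell text)
```

This is why the SceneAccess hunk can write `tryInt(leechers.string) if leechers else 0` and the SceneHD hunk can pass either an `<a>` tag's string or raw cell text without separate validation.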


@@ -16,7 +16,7 @@ config = [{
         {
             'name': 'enabled',
             'type': 'enabler',
-            'default': False
+            'default': True
         },
         {
             'name': 'domain',

Some files were not shown because too many files have changed in this diff.