Compare commits


2014 Commits

Author SHA1 Message Date
Ruud
6772b9d965 Don't migrate when db is closed 2014-07-17 23:09:26 +02:00
Ruud
5df14d67e1 One up 2014-07-17 22:28:32 +02:00
Ruud
73abd1f022 Merge branch 'refs/heads/master' into desktop 2014-07-17 22:27:23 +02:00
Ruud
e75a8529c9 Try fix migration failure from 2.5.1 2014-07-17 22:26:23 +02:00
Ruud
07a7f8cbcf Change fanart api url 2014-07-16 10:32:02 +02:00
Ruud
9b35a0fb20 Only trigger onClose when it's set 2014-07-08 21:21:22 +02:00
Ruud
0622e6e5ab One up 2014-06-29 23:16:09 +02:00
Ruud
f16931906f Don't remove pyc files when using desktop updater 2014-06-29 23:15:36 +02:00
Ruud
68dcba8853 One up 2/2 2014-06-29 21:56:51 +02:00
Ruud
ae8f66df1a Exit main loop on crash 2014-06-29 21:56:39 +02:00
Ruud
5237ead5cb Merge branch 'refs/heads/develop' into desktop 2014-06-29 17:01:47 +02:00
Ruud
45b2dff6d2 Merge branch 'refs/heads/develop' 2014-06-29 11:01:09 +02:00
Ruud
4cbc089de2 Log subfolder errors in renamer 2014-06-29 10:51:33 +02:00
Ruud
c45c04659f Use html parser for hdtrailers 2014-06-29 10:24:19 +02:00
Ruud
61a9037835 Don't error out if XBMC is turned off. fix #3515 2014-06-29 09:49:48 +02:00
Ruud
30d56b5d2c Merge branch 'refs/heads/develop' 2014-06-29 00:02:55 +02:00
Ruud
ad33c0bcca Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-06-28 23:58:21 +02:00
Ruud
7afc524a9f Freespace Windows not working. fix #3535 2014-06-28 23:58:02 +02:00
Ruud Burger
c5a4bc9a1b Merge pull request #3534 from mano3m/develop_fix_torrentshack
Fix torrentshack
2014-06-28 23:55:47 +02:00
Ruud
1c0178dbaf Fix 'ignore' mis-tagging 2014-06-28 23:54:52 +02:00
Ruud
dbf7feca3e Properly delete from manage 2014-06-28 23:46:28 +02:00
mano3m
d92de8ec4e Fix torrentshack 2014-06-28 23:43:16 +02:00
Ruud Burger
8347da5a58 Merge pull request #3529 from genial123/api-fix
Finish non-existent API calls instead of timing out
2014-06-28 23:17:37 +02:00
Ruud
59e248d7de Wrong argument split. fix #3518 2014-06-28 22:57:42 +02:00
genial123
12e556e1d1 Finish non-existent API calls instead of timing out 2014-06-28 08:19:05 +02:00
Ruud
14d3ab93da Add mp4 quality brrip 2014-06-25 23:22:35 +02:00
Ruud
e27ece512f Use release quality, not identifier to match downloaded release 2014-06-25 23:08:37 +02:00
Ruud
b88d8efc8d Allow 720p in cam quality. fix #3512 2014-06-25 22:56:40 +02:00
Ruud
5ff6824ae9 Merge branch 'refs/heads/develop' 2014-06-25 18:26:10 +02:00
Ruud
9ec4c2837e Don't put original title first 2014-06-25 18:20:42 +02:00
Ruud
ffc3fc9ec4 Check for broken indexes and reindex if needed 2014-06-25 18:16:45 +02:00
Ruud
a566b4f428 Setup property index with database module 2014-06-25 16:13:35 +02:00
Ruud
69819460f3 Add zoink.it for torrent caching 2014-06-25 09:27:23 +02:00
Ruud
0210859155 Merge branch 'refs/heads/develop' 2014-06-25 09:17:12 +02:00
Ruud
24a8cb41fe Keep previous status if restatus check fails 2014-06-24 22:01:27 +02:00
Ruud
1de0443492 Get default "stop after" if it isn't set yet. fix #3499 2014-06-24 21:56:46 +02:00
Ruud
bb19b380b4 Don't start CP when less than 100MB is available. fix #3502 2014-06-24 21:20:13 +02:00
Ruud
b6b936ddf3 Use other name guess. fix #3501 2014-06-24 20:50:25 +02:00
Ruud
b00b6acba8 Profile don't save. fix #3437 2014-06-24 20:18:51 +02:00
Ruud
3941076c06 Forgot to add the separator to test 2014-06-24 10:09:34 +02:00
Ruud
7401201af2 Add subfolder path test 2014-06-24 10:02:43 +02:00
Ruud
5c586fbf30 Update isSubFolder test 2014-06-24 10:02:14 +02:00
Ruud
5c891b7e8e Try next on failed trailer download 2014-06-23 23:47:30 +02:00
Ruud
665478db13 Merge branch 'refs/heads/develop' 2014-06-23 23:45:03 +02:00
Ruud
5425fcae9e Manually get with_status releases 2014-06-23 23:40:36 +02:00
Ruud
4008cce12f Manually get media with status 2014-06-23 23:37:17 +02:00
Ruud
d227105527 Make keep search advanced 2014-06-23 22:04:42 +02:00
Ruud
508649e6b6 Optimize import 2014-06-23 21:51:14 +02:00
Ruud
b4e25d4345 Indent fixes 2014-06-23 21:50:23 +02:00
Ruud
733f925c75 Merge branch 'refs/heads/mano3m-develop_wait_for_better' into develop 2014-06-23 21:49:21 +02:00
mano3m
40e910192e Fix tagging 2014-06-23 21:14:00 +02:00
mano3m
424a3cd892 Clean-up 2014-06-23 21:13:59 +02:00
mano3m
9f6036c8d6 Redo status update for media 2014-06-23 21:13:58 +02:00
mano3m
5af5749d4a Catch missing deleted profile error
@RuudBurger should we reset the profile of the media to default or None
in case this happens or leave it the way it is?
2014-06-23 21:10:11 +02:00
mano3m
f01449f14c Rename scanned files for done media properly 2014-06-23 21:10:10 +02:00
mano3m
03dff14ee9 Massive bug fix 2014-06-23 21:10:09 +02:00
mano3m
e55302592a Improve description 2014-06-23 21:10:09 +02:00
mano3m
dbeaab052d Wait before marking media as done 2014-06-23 21:10:08 +02:00
Ruud
84c366ab54 Merge branch 'master' of github.com:RuudBurger/CouchPotatoServer 2014-06-23 20:47:30 +02:00
Ruud
908e5eae77 Merge branch 'refs/heads/develop' 2014-06-23 20:47:06 +02:00
Ruud
9f07dd5a21 Reindex after full scan. fix #3492 2014-06-23 20:46:26 +02:00
Ruud
b933cd8718 Delete when total releases was 0 2014-06-23 20:40:04 +02:00
Ruud
c4aaa10308 One up 2014-06-23 20:00:06 +02:00
Ruud
d10536a829 Remove path from getOptions 2014-06-23 20:00:00 +02:00
Ruud
1e7fa82e11 Merge branch 'refs/heads/develop' into desktop 2014-06-23 19:01:58 +02:00
Ruud
8d85dde2c6 Don't use empty name_year return for moviemeter. fix #3493 2014-06-23 16:19:40 +02:00
Ruud
1d448f3d9c Merge branch 'refs/heads/develop' 2014-06-23 14:29:20 +02:00
Ruud
eaaa8dc834 Only try other if it's different 2014-06-23 14:15:00 +02:00
Ruud
5350dbf0ce Filter out extended and try other result on determine media. fix #3489 2014-06-23 14:13:32 +02:00
Ruud
28ffad10ab Standardize path for list directory api call. #3487 2014-06-23 13:43:33 +02:00
Ruud
338b5f427a Merge branch 'refs/heads/develop' 2014-06-23 13:37:50 +02:00
Ruud
a37517bf6a Use ssl startup options. fix #3490
Thanks @sjmcinness
2014-06-23 13:37:13 +02:00
Ruud
fab9b96c8e Keep done releases when removing from wanted/dashboard. fix #3488 2014-06-23 13:16:05 +02:00
Ruud
59e3e73c4c Merge branch 'refs/heads/develop' 2014-06-23 01:19:05 +02:00
Ruud
50d6882a98 Close all attached after start 2014-06-23 01:17:06 +02:00
Ruud
94064ac7da Rework restart methods 2014-06-23 01:09:32 +02:00
Ruud
1c5f19a68a Better reload hook name 2014-06-22 23:41:31 +02:00
Ruud
a26abd0dbb Don't use nonblock requests results if empty 2014-06-22 23:39:43 +02:00
Ruud
fb9080c18a Except value error 2014-06-22 23:38:51 +02:00
Ruud
15980471b0 Create api lock on the fly 2014-06-22 22:41:56 +02:00
Ruud
b11bb9cdac Catch missing profile in restatus 2014-06-22 21:32:52 +02:00
Ruud
cb2614127c Merge branch 'refs/heads/develop' 2014-06-22 21:14:44 +02:00
Ruud
474cd45fc5 Reset profile
to default when old one is empty or doesn't exist anymore
2014-06-22 21:14:31 +02:00
Ruud
0b6843a1b9 Force readd not adding with proper profile 2014-06-22 20:58:56 +02:00
Ruud
fdbd826917 Merge branch 'refs/heads/develop' 2014-06-22 20:35:30 +02:00
Ruud
fdcdf07fa6 Untag on delete from dashboard 2014-06-22 20:35:19 +02:00
Ruud
5617953d39 Mark as done missing. #3472 2014-06-22 20:23:47 +02:00
Ruud
964144996f Advanced not hidden. 2014-06-22 16:17:11 +02:00
Ruud
37214dd413 Put Pushover in config. close #3480 2014-06-22 16:15:36 +02:00
Ruud
5a08fed0b6 Manage release_id not assigned. fix #3479 2014-06-22 15:53:43 +02:00
Ruud
443866ef04 Use default title for search query. fix #3477 2014-06-21 18:50:36 +02:00
Ruud
96275adaff Use always search and ignore ETA. fix #3475 2014-06-21 18:44:09 +02:00
Ruud
31daf4915e Merge branch 'refs/heads/develop' 2014-06-20 21:31:48 +02:00
Ruud
33884deb6c Send single Pushbullet when no device is selected. fix #3471 2014-06-20 21:29:43 +02:00
Ruud
4ca7691afd Merge branch 'refs/heads/develop' 2014-06-20 21:08:33 +02:00
Ruud
7db291fc93 Show all in wizard 2014-06-20 21:07:17 +02:00
Ruud
9df14bd55a Cleanup provider lists 2014-06-20 21:04:24 +02:00
Ruud
1e183625c9 Description update 2014-06-20 20:45:26 +02:00
Ruud
643be19711 Update descriptions 2014-06-20 20:45:17 +02:00
Ruud
21a1770f3f Nzb icons 2014-06-20 18:14:08 +02:00
Ruud
07063d855a Add icons to torrent providers 2014-06-20 17:35:15 +02:00
Ruud
cf95e417f1 Delete publichd 2014-06-20 17:11:11 +02:00
Ruud
64d3ecd9b8 Merge branch 'refs/heads/develop' 2014-06-20 14:52:15 +02:00
Ruud
3f92ed0ea0 Don't autodownload releases with no file size. fix #3467 2014-06-20 14:33:20 +02:00
Ruud
d55df3240f Merge branch 'refs/heads/develop' 2014-06-20 14:14:26 +02:00
Ruud
578b74f2c0 Fix PushBullet url. fix #3470 2014-06-20 14:14:10 +02:00
Ruud
8e17b9aea5 Remove BoxCar 2014-06-20 14:12:43 +02:00
Ruud
52214e4938 Merge branch 'refs/heads/develop' 2014-06-20 12:22:13 +02:00
Ruud
6f766aae8c Tag and untag dashboard media 2014-06-20 12:13:54 +02:00
Ruud
5797348bb3 Update tag index 2014-06-20 12:10:40 +02:00
Ruud
57ca5067ff Insert themoviedb original_title by default. 2014-06-19 16:48:30 +02:00
Ruud
e8ff8a41de Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-06-19 14:15:53 +02:00
Ruud
0b5dfe826a Fix BitSoup parsing. close #3465 2014-06-19 14:15:42 +02:00
Ruud
67fbcc8238 Tag filter index 2014-06-17 23:32:27 +02:00
Ruud
dd61c7dc21 Compact DB every 7 days if needed 2014-06-17 16:27:22 +02:00
Ruud
3786b5435f Only remove movie title from 3d words check 2014-06-17 15:36:18 +02:00
Ruud
1857e047b0 Remove moviename words when scanning for 3d tags. fix #3395 2014-06-17 15:22:20 +02:00
Ruud
648ac7793f Add multiple 3d tags to clean regex 2014-06-17 15:21:30 +02:00
Ruud
664ce6421f Try only parse filename for release name 2014-06-17 15:21:08 +02:00
Ruud
cfb77a1076 Don't use extension to test for quality tags. fix #3457 2014-06-17 14:22:42 +02:00
Ruud
f65ddbbb9e Encode environments args in html 2014-06-16 22:39:02 +02:00
Ruud
76126271fc Don't add default profile if status is done 2014-06-16 22:06:39 +02:00
Ruud
3faece0b4c Don't log already deleted releases. 2014-06-16 21:21:55 +02:00
Ruud
530d3cd91e Update rentals URL 2014-06-15 22:37:18 +02:00
Ruud
e659aba176 Clean .pyc files before starting 2014-06-15 22:22:34 +02:00
Ruud
a196a499ae Only cache qualities if list length is correct 2014-06-15 22:13:01 +02:00
Ruud
58bd9cd7a1 Unable to hide & reorder profiles. fix #3437 2014-06-15 14:59:02 +02:00
Ruud
9dd9f850c6 Treat seeding as "done" 2014-06-14 18:59:52 +02:00
Ruud
cbecb74307 Show ETA on soon list. fix #2702 2014-06-14 18:57:27 +02:00
Ruud
b45307e493 Merge branch 'refs/heads/develop' 2014-06-11 23:51:05 +02:00
Ruud
8ae1e58614 Don't call parent init for synoindex 2014-06-11 23:34:17 +02:00
Ruud
83e8ae392d Don't create a new "done" release on rename. fix #3250 2014-06-11 23:17:59 +02:00
Ruud
c0297f10cb Force download on "best release" selection 2014-06-11 22:24:11 +02:00
Ruud
41052ae508 Use same before ETA message 2014-06-11 22:14:13 +02:00
Ruud
2d243d51e4 Ignore ETA on manual refresh 2014-06-11 22:03:38 +02:00
Ruud
fdec80f676 Set last_force_eta time 2014-06-11 21:21:32 +02:00
Ruud
5d3b0deb4d Simpler progress update 2014-06-11 21:10:20 +02:00
Ruud
f68c356944 Update title index 2014-06-11 21:07:02 +02:00
Ruud
553f8d6ccd Ignore ETA every 7 days on search 2014-06-11 17:05:08 +02:00
Ruud
60fb3e33ae Sony PS3 metadata 2014-06-11 15:22:07 +02:00
Ruud
9b7c1db509 Sony PS3 metadata 2014-06-11 15:20:21 +02:00
Ruud
963ce356fb MediaBrowser metadata 2014-06-11 15:19:25 +02:00
Ruud
dcd0364ecc Re-use tiny scroller for webkit 2014-06-11 14:59:06 +02:00
Ruud
a2da428777 Chart css cleanup
Tiny webkit scroll
2014-06-11 14:40:22 +02:00
Ruud
876c602710 Code cleanup 2014-06-11 12:29:31 +02:00
Ruud
79cb716ced Update Mootools 2014-06-11 10:34:52 +02:00
Ruud
4320369448 Merge branch 'refs/heads/develop' 2014-06-11 10:15:31 +02:00
Ruud
ba9c975335 Allow empty quality 2014-06-11 10:11:36 +02:00
Ruud
ef407bcb3c Don't clear pyc when develop 2014-06-11 09:53:52 +02:00
Ruud
2898a066fe Prevent threading from GC before proper close. fix #3420 2014-06-11 09:49:30 +02:00
Ruud
7950c4bdb4 Update fedora service init 2014-06-11 09:34:06 +02:00
Ruud
2499012d88 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-06-11 09:31:43 +02:00
Ruud
7788669de1 Fill in profiles & qualities when they are empty. fix #3396 2014-06-11 09:31:29 +02:00
Ruud
f560dc093c Merge branch 'refs/heads/develop' 2014-06-10 22:54:14 +02:00
Ruud
d7f6fad3dd Unicode filenames before saving release. fix #3383 2014-06-10 22:51:15 +02:00
Ruud
699c562d34 Return default resolution if nothing found 2014-06-10 22:47:23 +02:00
Ruud
36d8225389 Trigger search change on paste. fix #3416 2014-06-10 21:02:48 +02:00
Ruud
17ba9ee96b Allow full library refresh interval. fix #2807 2014-06-10 13:54:31 +02:00
Ruud
2769fc28d3 Catch RecordNotFound error. fix #3373 2014-06-10 13:40:51 +02:00
Ruud
f5f3cfba50 More general logs 2014-06-10 11:14:19 +02:00
Ruud
1b1c77d225 Use magnetprovider for yify #3406 2014-06-10 11:13:18 +02:00
Ruud
cfc49e286b Allowed datadir giving false positive. fix #3399 2014-06-08 11:57:53 +02:00
Ruud
d26a2b1480 Merge branch 'refs/heads/develop' 2014-06-07 20:44:49 +02:00
Ruud
a2b3677c59 Settings.save doc update. closes #3391 2014-06-07 08:48:11 +02:00
Ruud
e5cfafdb00 Update Tornado 3.2.2 2014-06-06 22:25:16 +02:00
Ruud
bff05925e8 Only allow 3d tag as single word, not partial. fix #3368 2014-06-06 21:37:16 +02:00
Ruud
05f4b2b8ce Allow full scan and quick scan separately 2014-06-06 21:36:35 +02:00
Ruud
2eac294643 Allow already deleted releases 2014-06-06 20:42:01 +02:00
Ruud
f6789f79ea Import cleanup 2014-06-06 20:14:57 +02:00
Ruud
0b5976bdb1 Catch HTTPError properly in trailer search. fix #3388 2014-06-06 18:51:51 +02:00
Ruud
7d2b2b9809 Metadata fixes 2014-06-06 18:09:17 +02:00
Ruud
cce92dc1f8 Don't test for redirect. fix #3381 2014-06-06 18:04:13 +02:00
Ruud
fa7e59e842 Don't save profile order twice 2014-06-06 17:26:54 +02:00
Ruud
e11b07b559 Don't save profile order twice 2014-06-06 17:26:45 +02:00
Ruud
b6ee8ef4d4 Merge branch 'refs/heads/develop' 2014-06-06 11:24:24 +02:00
Ruud
8635f0ddb2 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-06-06 11:23:46 +02:00
Ruud
c90a423012 Disable SSL verification 2014-06-06 11:23:21 +02:00
Ruud
f0daee669b Add opener to env 2014-06-06 10:53:08 +02:00
Ruud Burger
d252b660f5 Merge pull request #3362 from clinton-hall/patch-1
Fix instructions for Ubuntu
2014-06-04 08:18:41 +02:00
Clinton Hall
e717a49c0c Fix instructions for Ubuntu 2014-06-04 13:29:20 +09:30
Ruud
426155e65c Add extra score if size is unique. fix #3344 2014-06-03 23:34:32 +02:00
Ruud
6b9b446e3d Quality guess keyerror. fix #3347 2014-06-03 22:53:30 +02:00
Ruud
ab2b2cfe6e Cleaner empty dir cleanup 2014-06-03 22:49:12 +02:00
Ruud
f80559d380 Merge branch 'refs/heads/develop' 2014-06-03 22:31:20 +02:00
Ruud
4b236c6ed6 Only cleanup source folders 2014-06-03 22:25:17 +02:00
Ruud
2396fadf04 Remove debug print 2014-06-03 20:25:57 +02:00
Ruud
a3bffb5867 Use searchOnTitle for TorrentLeech. fix #3351 2014-06-03 17:32:53 +02:00
Ruud
1b44fc40af Properly delete from late-list. fix #3350 2014-06-03 17:26:38 +02:00
Ruud
8530b00e7b Merge branch 'refs/heads/develop' 2014-06-03 17:18:11 +02:00
Ruud
b894139ca1 Make full path for logs 2014-06-03 16:54:21 +02:00
Ruud
daa0662869 XMPP was importing itself 2014-06-03 16:54:08 +02:00
Ruud
81de9529c3 Force folder creation on startup 2014-06-03 16:52:44 +02:00
Ruud
5851e1e69f Merge branch 'refs/heads/develop' 2014-06-02 23:51:01 +02:00
Ruud
6b06caf00d Api call release lock never triggered 2014-06-02 22:57:45 +02:00
Ruud
9370366112 Don't limit fanart calls 2014-06-02 22:57:27 +02:00
Ruud
32bcf6e615 Requests 2.3.1 2014-06-02 22:36:09 +02:00
Ruud
aa804471a7 Prioritize image info 2014-06-02 22:27:56 +02:00
Ruud
681d8b1ddc Simplify fanart provider 2014-06-02 22:23:26 +02:00
Ruud
686bfd62eb Merge branch 'refs/heads/develop' 2014-06-02 15:10:29 +02:00
Ruud
c82b1f51e3 Get messages from last 7 days, not just unread. fix #3331 2014-06-02 14:44:09 +02:00
Ruud
6d048e0003 Don't try to parse faulty IMDB page 2014-06-02 14:35:06 +02:00
Ruud
9b82603c26 Merge branch 'refs/heads/develop' 2014-06-02 14:20:50 +02:00
Ruud
0314910bbe Don't migrate empty library items 2014-06-02 14:11:26 +02:00
Ruud
3bd831782c Release lock inside thread 2014-06-02 14:02:55 +02:00
Ruud
40f01dca6f Use async request for all api calls 2014-06-02 13:31:18 +02:00
Ruud
f41792915f Merge branch 'refs/heads/develop' 2014-06-02 12:59:47 +02:00
Ruud
8dead66b58 Migration fixes 2014-06-02 12:59:21 +02:00
Ruud
2fa77fb610 Merge branch 'refs/heads/develop' 2014-06-02 10:40:07 +02:00
Ruud
18807191c0 Don't reindex on startup 2014-06-01 17:36:34 +02:00
Ruud
9d9630a27a Sorted backup files 2014-06-01 16:46:21 +02:00
Ruud
8ac851555d Can't trigger same api call
Thread never closes
2014-06-01 16:14:53 +02:00
Ruud
e64d0e33fc Merge branch 'refs/heads/develop' 2014-06-01 14:31:39 +02:00
Ruud
27f331a1fc Don't verify ssl for downloaders 2014-06-01 14:30:45 +02:00
Ruud
e6b4d32506 IMDB Watchlist count was off 2014-06-01 11:37:57 +02:00
Ruud
a28ee58a1f Remove digestauth header 2014-06-01 00:26:37 +02:00
Ruud
47749c2d73 Transmission login failed. #1110 2014-06-01 00:23:46 +02:00
Ruud
d6d0ff724a Change label 2014-06-01 00:11:09 +02:00
Ruud
ba65700aad Use textarea value for log posting 2014-05-31 23:46:34 +02:00
Ruud
b168643600 Merge branch 'refs/heads/develop'
Conflicts:
	couchpotato/core/helpers/variable.py
2014-05-31 22:50:02 +02:00
Ruud
84a7cfe07d Add CP version by default in logs 2014-05-31 22:09:18 +02:00
Ruud
9ccd4a5e84 Shutdown logging 2014-05-31 21:56:37 +02:00
Ruud
616434a00f Delay release cleanup 2014-05-31 21:31:06 +02:00
Ruud
4cf62f73da Use proper conf variable 2014-05-31 19:51:58 +02:00
Ruud
0145aecab4 data['size'] sometimes doesn't exist 2014-05-31 13:53:45 +02:00
Ruud
6c4184d1f5 Use minimal requirements for popular movie automation 2014-05-31 13:37:30 +02:00
Ruud
9d011b42a9 Moved PopularMovies automation to single file 2014-05-31 13:30:40 +02:00
Ruud
bf81b5cacc Move automation provider 2014-05-31 13:24:23 +02:00
Ruud
8d2b6e4097 Merge branch 'refs/heads/sjlu-develop' into develop 2014-05-31 13:22:18 +02:00
Ruud
50d8399f09 Merge branch 'develop' of git://github.com/sjlu/CouchPotatoServer into sjlu-develop 2014-05-31 13:21:58 +02:00
Ruud
bc99b77dbe Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-05-31 13:18:48 +02:00
Ruud
1c7edc9487 TPB and Kickass proxy update 2014-05-31 13:18:11 +02:00
Ruud Burger
90c06fb3c9 Merge pull request #3290 from kackar/patch-4
Update contributing.md
2014-05-31 13:14:17 +02:00
Ruud Burger
10a04c16ba Merge pull request #3289 from kackar/patch-3
Update README.md
2014-05-31 13:13:29 +02:00
Ruud
90a618bd7e Allow search on imdb urls 2014-05-31 12:33:35 +02:00
Ruud
b630b84ab0 Get proper poster image from tmdb 2014-05-31 12:09:55 +02:00
Ruud
a5ee362fc0 Remove scandir lib, use os.walk 2014-05-31 11:53:24 +02:00
Ruud
7c0870b6b8 Capitalize 2014-05-29 20:44:15 +02:00
Ruud
a42264b280 Tornado 3.2.1 2014-05-29 20:09:16 +02:00
Ruud
e714604ec0 Requests 2.3.0 2014-05-29 20:01:22 +02:00
Ruud
c094120f04 Do verify requests 2014-05-29 19:50:36 +02:00
Ruud
6691c8ddd7 Convert underscore method to proper camelcase 2014-05-29 19:43:33 +02:00
kackar
013705c318 Update contributing.md 2014-05-25 21:40:49 +02:00
kackar
bda6f92a4d Update README.md 2014-05-25 21:34:59 +02:00
Ruud
7ceb8dc79c ILoveTorrents search fix 2014-05-19 21:49:08 +02:00
Ruud
7f48210c97 Use libs as import 2014-05-19 21:16:58 +02:00
Ruud
23c440cd58 Merge branch 'refs/heads/Boehemyth-develop' into develop 2014-05-19 21:07:56 +02:00
Ruud
0097167dec Fanart PR cleanup 2014-05-19 21:07:51 +02:00
Ruud
21e5f156bb Merge branch 'develop' of git://github.com/Boehemyth/CouchPotatoServer into Boehemyth-develop 2014-05-19 19:41:31 +02:00
Ruud
08f55314d5 Re-use imdb page parser 2014-05-19 19:25:16 +02:00
Ruud
577bf09859 Add fallback imdb filter.
BeautifulSoup fails to load elements based on ID for the html imdb returns...
Hacky way of filtering out the correct elements.
2014-05-19 13:35:44 +02:00
Ruud Burger
c446cd2fb0 Merge pull request #3252 from mano3m/develop_notif
Add 3D to download notification. fixes #3242
2014-05-19 10:28:43 +02:00
mano3m
06a8414f12 Add 3D to download notification. fixes #3242 2014-05-13 20:23:43 +02:00
Ruud
1ac01456a9 Don't show snatched movies in late section. fix #3013 2014-05-11 20:18:50 +02:00
Ruud
b86853f06f More path encoding 2014-05-11 19:36:22 +02:00
Ruud
311a2798dd Revert "Encode before path join"
This reverts commit b87c00c041.
2014-05-11 18:58:14 +02:00
Ruud
fe9998fb9d Revert "Don't re-encode by filesystem encoding"
This reverts commit d5e19db5e6.
2014-05-11 18:53:13 +02:00
Ruud
ce648c5d35 Post mass edit commands 2014-05-11 18:35:38 +02:00
Ruud
5a2a9bbf9a Loop through release files properly 2014-05-11 17:40:55 +02:00
Ruud
0f8ab05fd4 Clean up managed movies 2014-05-11 17:40:43 +02:00
Ruud
b87c00c041 Encode before path join 2014-05-11 17:18:27 +02:00
Ruud
8999f51dc9 Make quality guess debug message 2014-05-11 16:07:23 +02:00
Ruud
d5e19db5e6 Don't re-encode by filesystem encoding 2014-05-11 16:03:27 +02:00
Ruud
675bee83ca Path encode 2014-05-11 16:02:49 +02:00
Ruud
33e5dd1fdb Speed up log highlight
Allow reverse selection
2014-05-11 00:52:55 +02:00
Ruud Burger
4ff2794c83 Merge pull request #3234 from mikke89/fix-providers
Fix searches on torrentday and sceneaccess
2014-05-11 00:30:13 +02:00
Ruud
81f9302da1 Use super threaded db connection 2014-05-11 00:19:05 +02:00
Ruud
93f4b8b537 Don't log non existing properties 2014-05-11 00:14:56 +02:00
mikke89
0587d2f8db Fix searches on torrentday and sceneaccess 2014-05-10 22:26:41 +02:00
Ruud
6ba25b5468 Better highlight 2014-05-10 15:58:29 +02:00
Ruud
cc10969506 Keep log filter
Pre-fill in issue when possible
2014-05-10 15:53:27 +02:00
Ruud
c2eb50a7ee Log reporting 2014-05-10 13:23:13 +02:00
Dan Boehm
33d24068fd Merge remote-tracking branch 'upstream/develop' into develop 2014-05-09 13:04:36 -05:00
Ruud
3a4c191b11 Make logs filterable 2014-05-09 16:53:25 +02:00
Ruud
e06b4ccb3f Ignore "wait for" for all if 1 is old enough 2014-05-09 15:53:37 +02:00
Ruud
3c6b86ea28 Delay first search 2014-05-09 15:53:07 +02:00
Ruud
c4a9a13d6c Don't continue searching lower qualities if correct one is found 2014-05-09 14:30:06 +02:00
Ruud
c0f1a3c603 Show chart scrollbar only on hover 2014-05-09 12:14:29 +02:00
Ruud
9d3425061a Resize thumbnail-less soon movies 2014-05-09 12:04:07 +02:00
Ruud
c2dcd2f67d Use the "wait for" option properly. fix #3224 2014-05-09 11:46:32 +02:00
Ruud
24b822aecd Info2 log 2014-05-09 11:44:58 +02:00
Ruud
a7d3de766f Don't migrate release if quality doesn't exist 2014-05-09 00:58:52 +02:00
Ruud
b56c897e4b Don't give negative score for non matching size 2014-05-08 23:45:35 +02:00
Ruud
df14032107 Add offset for log partial 2014-05-08 16:59:41 +02:00
Ruud
66b4821f7f Profile references before assigned. fix #3220 2014-05-08 16:40:27 +02:00
Ruud
d301cde266 Newznab custom tag wasn't used. fix #3219 2014-05-08 16:37:36 +02:00
Ruud
0590a0d722 Update log api 2014-05-08 16:30:58 +02:00
Ruud
fc71a03a12 Just loop over log array 2014-05-08 16:27:13 +02:00
Ruud
923c794e39 Logs return list 2014-05-08 16:16:42 +02:00
Ruud
e7fbff5b3f Only remove non-existing releases only once 2014-05-08 15:40:24 +02:00
Ruud
1bd556fbb3 Close DB on shutdown 2014-05-08 15:39:40 +02:00
Ruud
18a870f8c3 Log if no quality is found 2014-05-08 14:57:26 +02:00
Ruud
3e2a2c3bee Remove unused variable 2014-05-08 14:52:03 +02:00
Ruud
73e74881a6 Always return handler 2014-05-08 14:51:53 +02:00
Ruud
b37112600e Only cache ignored proxies for 1 day 2014-05-08 14:51:42 +02:00
Ruud
6172ce4960 Use contains other quality in log 2014-05-08 14:51:26 +02:00
Ruud
3d277e1c01 Quality scoring and tests 2014-05-08 14:51:12 +02:00
Ruud
b3b13899f1 Return found qualities 2014-05-08 14:50:59 +02:00
Ruud
7c4a59539a Return brrip in Yify provider 2014-05-08 14:50:04 +02:00
Ruud
240283405e variable 'year' referenced before assignment 2014-05-07 11:50:36 +02:00
Ruud
e6dfb3da16 Allow last year's films to be searched after April 2014-05-07 00:47:50 +02:00
Ruud
8e220ededa Bitsoup: Allow param in search url 2014-05-07 00:25:18 +02:00
Ruud
11126f8083 forceDefaults priority 2014-05-07 00:23:08 +02:00
Ruud
ac8a13db22 Remove orphaned releases 2014-05-06 23:49:34 +02:00
Ruud
5ab10ff97a Change dognzb default url 2014-05-06 22:24:58 +02:00
Ruud
f3b0346ba2 Use encoding as backup 2014-05-06 21:38:08 +02:00
Ruud
96c94f97f4 Filter out tvshows in charts 2014-05-06 20:35:00 +02:00
Ruud
192c0200e5 Disable top 250 chart 2014-05-06 20:06:06 +02:00
Ruud
03ae8f459c Merge branch 'refs/heads/mano3m-develop_higherq' into develop 2014-05-06 16:08:47 +02:00
Ruud
377fdd9e5e Use correct event 2014-05-06 16:08:36 +02:00
Ruud
daec7d20fe Merge branch 'develop_higherq' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_higherq 2014-05-06 16:00:16 +02:00
Ruud
66a149590b Chart fixes 2014-05-06 15:51:09 +02:00
Ruud
1b6f010df2 Merge branch 'refs/heads/mano3m-develop_rentals' into develop 2014-05-06 15:39:44 +02:00
Ruud
7e4bc29b59 Chart cleanup 2014-05-06 15:39:41 +02:00
Ruud
0284fa9b0a Load correct beautifulsoup module 2014-05-06 14:31:24 +02:00
Ruud
e5bcea59b5 Merge branch 'develop_rentals' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_rentals 2014-05-06 14:14:41 +02:00
Ruud
16f603ced2 Cast float 2014-05-05 22:42:14 +02:00
Ruud
bdcb3b7e33 Merge branch 'refs/heads/mano3m-develop_3D_stuff' into develop 2014-05-05 22:36:45 +02:00
Ruud
0def6fcfe3 Cleanup PR 2014-05-05 22:36:41 +02:00
Ruud
75a352fef3 Merge branch 'develop_3D_stuff' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_3D_stuff 2014-05-05 22:13:36 +02:00
Ruud
07eb1f7f4c Allow page ordering 2014-05-05 22:01:11 +02:00
Ruud
8e35c02763 Cleanup searchOnTitle queries 2014-05-05 21:18:30 +02:00
Ruud
c1f6d9a858 Merge branch 'refs/heads/mikke89-dev-torrentsearch' into develop 2014-05-05 20:45:04 +02:00
Ruud
3e20a3bac7 Merge branch 'dev-torrentsearch' of git://github.com/mikke89/CouchPotatoServer into mikke89-dev-torrentsearch 2014-05-05 20:44:34 +02:00
Ruud
818570fd2d Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-05-05 20:42:22 +02:00
Ruud
bcd2d22fbf Update bitsoup searchOnTitle 2014-05-05 20:38:47 +02:00
mikke89
ffc99cd4f4 Fix false positives from Sceneaccess and Torrentday 2014-05-04 23:23:45 +02:00
Ruud Burger
bb56750c1a Merge pull request #3143 from mano3m/develop_charts
Improve charts
2014-05-04 14:50:15 +02:00
Ruud Burger
b08d587a22 Merge pull request #3166 from harrv/develop
Added an mpaa_only rename replacement token
2014-05-04 13:58:37 +02:00
Ruud Burger
47f4132b39 Merge pull request #3188 from rfgamaral/duplicate_subtitle_identifier
Avoid duplicate subtitle language identifier
2014-05-04 13:48:06 +02:00
Ruud Burger
faefab5554 Merge pull request #3189 from rfgamaral/force_download_subtitles
Force download all subtitle languages
2014-05-04 13:44:39 +02:00
Ruud Burger
243a033055 Merge pull request #3192 from mano3m/develop_yifi
Fix Yifi
2014-05-04 13:42:52 +02:00
Ruud Burger
db1eeaae38 Merge pull request #3193 from mano3m/develop_api
Add Blu-ray to Bit HDTV provider
2014-05-04 13:41:17 +02:00
mano3m
8c2960e891 Add Blu-ray to Bit HDTV provider
Fixes #3171
2014-05-04 11:14:54 +02:00
mano3m
d6a86e8616 correct caps 2014-05-04 10:11:25 +02:00
mano3m
5260f42378 Fix Yifi
this should fix #3114
2014-05-04 10:03:30 +02:00
Ricardo Amaral
84f28f3c54 Add advanced option to force download all languages 2014-05-03 17:50:12 +01:00
Ricardo Amaral
860b6793fb Prevent duplicate subtitle language identifier 2014-05-03 17:44:03 +01:00
harrv
df03409d7a Added an mpaa_only rename replacement token
The mpaa replacement token includes certifications from around the world. If the user wishes to limit the values to one of 'G', 'PG', 'PG-13', 'R', 'NC-17' or 'Not Rated' they can use the added mpaa_only replacement token. The original mpaa replacement token remains unchanged.
2014-04-27 00:47:38 -06:00
Dan Boehm
6a81f2241d Added option to run the Artwork Downloader addon during XBMC notify.
This option will only work in XBMCv12 (Frodo) or later.  It also requires the Artwork Downloader
Addon.

Since XBMC's API doesn't support notifications over HTML, there is no way for couchpotato to know
when the Library Scan is complete.  Since running the Artwork Downloader before the movie has
been scanned won't solve anything, a delay timer can be adjusted to suit the user's needs.

Squashed commit of the following:

commit bd60ed585f77cc40c31fd67d4ae732e0845d31ab
Merge: fcb092e b113a4d
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Thu Apr 24 14:26:24 2014 -0500

    Merge branch 'fanarttv' into artdlnotify

commit b113a4def197a9ca8545bde9f5081c0591b93b36
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Thu Apr 24 14:24:12 2014 -0500

    Bug-fix and code cleanup.

    Fixed a bug where the movie.info event would crash if there aren't any pictures to scrape in
    fanart.tv.

commit fcb092e776e00ceabea016b3c26d9394e32d72b0
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Thu Apr 24 14:21:27 2014 -0500

    Option to run the artwork downloader addon during XBMC notify.

commit adf7a4675d472e9e95a316c6cccc681a52804f13
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 16:15:03 2014 -0500

    Added support for extrafanart.
    Also, the main fanart will be taken from fanart.tv unless one
    does not exist.

commit 1791e46c8602f40bb56fe0cf7ecb0607f35b4b12
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 15:13:14 2014 -0500

    Couchpotato now downloads extrathumbs from the extra tmdb backdrops if they exist.

    This commit made some major changes to the core image creation functionality that
    makes writing multiple images to folders possible.

commit c0858807873749dbc928c0260037138f51f894ca
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 12:18:53 2014 -0500

    Bug Fix & Implemented functionality to select bluray or dvd disc images.

    Currently, only blurays will be selected, unless there are no blurays.
    However, if a mechanism for determining the quality of the release is
    implemented, it would be simple to make this selection based on the
    quality.

commit 786751371d243f53d0f5c6f2c38d92288d8608ba
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 10:59:25 2014 -0500

    Fixed a bug where non-HD clearart and logos couldn't be downloaded.

commit feda8df483d13b5a5df3a869f25de8f2c7e6ffe3
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 10:12:31 2014 -0500

    Fixed some problems that were missed with the previous merge.

commit 5ddab6c40e69a5accc6c0336cd7485920ff82d8f
Merge: 7273abf ff46aa0
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 10:02:11 2014 -0500

    Merge branch 'develop' into fanarttv

    Conflicts:
    	couchpotato/core/media/movie/providers/info/themoviedb.py
    	couchpotato/core/providers/metadata/xbmc/__init__.py

commit 7273abf827735cf245711c3d3199a6a173a964aa
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Thu Feb 27 13:29:57 2014 -0600

    Downloads extra artwork from fanart.tv

    Downloads occur with correct filenaming when XBMC metadata is generated,
    but the image URLs are selected when the movie.info event is called.

commit 9080d9d749c7e1ddbdc78f7b37a3c5f83195d580
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Wed Feb 26 16:31:37 2014 -0600

    Added basic functionality for fanarttv provider.

    This should be mostly done and is based on the tvdb provider.

commit 1b39b246c2a9d65f9ef86c4e150a12d893e362c0
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Wed Feb 26 14:50:17 2014 -0600

    Updated fanarttv library with the correct package hierarchy
    (libs.fanarttv).

commit 8abb7c8f8ad3347900debb9f6a6d5a7acb7df396
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Wed Feb 26 13:12:48 2014 -0600

    Added fanart.tv API python library (lib.fanarttv).

    The upstream for this library is at
    https://github.com/z4r/python-fanart.
2014-04-24 15:02:29 -05:00
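The delay-timer approach described in the commit above (trigger the XBMC library scan, then wait a user-configurable delay before starting the Artwork Downloader addon, since XBMC gives no completion callback) can be sketched roughly as follows. All names here (`trigger_scan_then_artwork`, `update_library`, `run_addon`, the addon id) are hypothetical illustrations, not CouchPotato's or XBMC's actual API:

```python
import threading

# Hypothetical sketch of the approach described above: since XBMC gives no
# callback when its library scan finishes, wait a user-adjustable delay after
# triggering the scan before starting the Artwork Downloader addon.

def trigger_scan_then_artwork(xbmc, delay_seconds = 60):
    xbmc.update_library()  # assumed call that starts the XBMC library scan

    # Schedule the addon to run later instead of blocking the notify handler
    timer = threading.Timer(delay_seconds, xbmc.run_addon,
                            args = ('script.artwork.downloader',))
    timer.daemon = True
    timer.start()
    return timer
```

The delay is exposed as a setting precisely because the right value depends on how long the user's library scan takes.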
Dan Boehm
5ce817cee6 Support for downloading extra artwork from Fanart.tv (resolves #1023).
New image types include:
* clearart
* discart
* extrathumbs
* extrafanart
* logo
* banner
* landscape (16:9 Thumb)

There are a couple things that should be noted:
1. Only English images will be downloaded.
2. The fanart image is now downloaded from Fanart.tv if one can be found; otherwise it uses TMDB
like it used to.  This is because the images on Fanart.tv tend to be of higher resolution &
quality.
3. Since multiple extrathumbs and extrafanarts are downloaded into a subdirectory, subdirectories
are now supported for metadata file names.  The subdirectories will be automatically created if
they don't exist.
4. Bluray discart will always be preferred over DVD.  Ideally, it would prefer DVD versions for
SD quality movies, but I couldn't find an easy way to determine the quality from within the
plugin.  I suspect major changes would be needed to the plugin system in general in order to get
this to work.  If a user cares about the distinction, the best work-around is to not download
these in Couchpotato and run the Artwork Downloader addon from within XBMC.
5. A maximum of 4 extrathumbs and 20 extrafanarts will be downloaded.
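The fallback in point 2 and the caps in point 5 can be sketched roughly as follows (hypothetical helper names and data shapes; the real plugin wiring in CouchPotato differs):

```python
def pick_fanart(fanarttv_images, tmdb_images):
    """Prefer a Fanart.tv backdrop (usually higher resolution); fall back to TMDB."""
    # Each argument is assumed to be a list of image URL strings.
    if fanarttv_images:
        return fanarttv_images[0]
    if tmdb_images:
        return tmdb_images[0]
    return None

# Limits described in point 5.
MAX_EXTRATHUMBS = 4
MAX_EXTRAFANART = 20

def limit_extras(thumbs, fanart):
    """Cap the extra images written into the subdirectories."""
    return thumbs[:MAX_EXTRATHUMBS], fanart[:MAX_EXTRAFANART]
```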

Squashed commit of the following:

commit b113a4def197a9ca8545bde9f5081c0591b93b36
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Thu Apr 24 14:24:12 2014 -0500

    Bug-fix and code cleanup.

    Fixed a bug where the movie.info event would crash if there aren't any pictures to scrape in
    fanart.tv.

commit adf7a4675d472e9e95a316c6cccc681a52804f13
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 16:15:03 2014 -0500

    Added support for extrafanart.
    Also, the main fanart will be taken from fanart.tv unless one
    does not exist.

commit 1791e46c8602f40bb56fe0cf7ecb0607f35b4b12
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 15:13:14 2014 -0500

    Couchpotato now downloads extrathumbs from the extra tmdb backdrops if they exist.

    This commit made some major changes to the core image creation functionality that
    makes writing multiple images to folders possible.

commit c0858807873749dbc928c0260037138f51f894ca
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 12:18:53 2014 -0500

    Bug Fix & Implemented functionality to select bluray or dvd disc images.

    Currently, only blurays will be selected, unless there are no blurays.
    However, if a mechanism for determining the quality of the release is
    implemented, it would be simple to make this selection based on the
    quality.
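The selection rule described above (Bluray unless none exist) can be sketched like this; the dict shape and function name are illustrative, not the plugin's actual API:

```python
def pick_discart(discarts):
    """Pick a disc image, preferring Bluray over DVD as the commit describes.

    `discarts` is a hypothetical list of dicts like
    {'url': ..., 'disc_type': 'bluray' | 'dvd'}.
    A quality-aware version could branch on the release quality here instead.
    """
    bluray = [d for d in discarts if d.get('disc_type') == 'bluray']
    pool = bluray or discarts  # fall back to whatever is available
    return pool[0]['url'] if pool else None
```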

commit 786751371d243f53d0f5c6f2c38d92288d8608ba
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 10:59:25 2014 -0500

    Fixed a bug where non-HD clearart and logos couldn't be downloaded.

commit feda8df483d13b5a5df3a869f25de8f2c7e6ffe3
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 10:12:31 2014 -0500

    Fixed some problems that were missed with the previous merge.

commit 5ddab6c40e69a5accc6c0336cd7485920ff82d8f
Merge: 7273abf ff46aa0
Author: Dan Boehm <dboehm.dev@gmail.com>
Date:   Wed Apr 23 10:02:11 2014 -0500

    Merge branch 'develop' into fanarttv

    Conflicts:
    	couchpotato/core/media/movie/providers/info/themoviedb.py
    	couchpotato/core/providers/metadata/xbmc/__init__.py

commit 7273abf827735cf245711c3d3199a6a173a964aa
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Thu Feb 27 13:29:57 2014 -0600

    Downloads extra artwork from fanart.tv

    Downloads occur with correct filenaming when XBMC metadata is generated,
    but the image URLs are selected when the movie.info event is called.

commit 9080d9d749c7e1ddbdc78f7b37a3c5f83195d580
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Wed Feb 26 16:31:37 2014 -0600

    Added basic functionality for fanarttv provider.

    This should be mostly done and is based on the tvdb provider.

commit 1b39b246c2a9d65f9ef86c4e150a12d893e362c0
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Wed Feb 26 14:50:17 2014 -0600

    Updated fanarttv library with the correct package hierarchy
    (libs.fanarttv).

commit 8abb7c8f8ad3347900debb9f6a6d5a7acb7df396
Author: dan <dan@DBoehm-Arch.danboehm>
Date:   Wed Feb 26 13:12:48 2014 -0600

    Added fanart.tv API python library (lib.fanarttv).

    The upstream for this library is at
    https://github.com/z4r/python-fanart.
2014-04-24 15:00:04 -05:00
mano3m
7cdf124f9d Improve charts
- Add a max height to each chart with a scrollbar
- Add advanced options to hide chart items already in wanted or library
(note that this can be done more efficiently...)
2014-04-21 16:18:51 +02:00
Ruud Burger
ff46aa0226 Merge pull request #3138 from mano3m/develop_kat
Add verified only option for kat
2014-04-21 15:21:37 +02:00
mano3m
669e331f6c Ruud's comments 2014-04-21 12:15:05 +02:00
mano3m
4179ba642b Various Fixes 2014-04-21 00:26:23 +02:00
mano3m
00954d98f7 Improve scanner
- Fix the disc and tag removal: they received the filename with caps
- Add default metadata when resolution is known
2014-04-21 00:26:21 +02:00
mano3m
037e77860b Add 3D type to renamer (e.g. SBS, Half OU, etc) 2014-04-21 00:26:20 +02:00
mano3m
47e187449d Add use of size to scanner
And check if snatched quality is the same as what we detected
2014-04-21 00:26:19 +02:00
mano3m
06e9afbe69 Improve quality self test 2014-04-21 00:26:18 +02:00
mano3m
bfe8aa5f5f Add size to quality guessing
And cleanup searcher
2014-04-21 00:26:17 +02:00
mano3m
e51ddd7a50 BR-Disk detection fixes 2014-04-21 00:26:16 +02:00
mano3m
442552c024 fix debug msg 2014-04-21 00:26:16 +02:00
mano3m
ce4806df64 Add 3D renamer option 2014-04-21 00:26:15 +02:00
mano3m
0c2e65c92b Check for better quality
Actually check the quality profile order and determine:
- if the searcher needs to search for a certain quality
- if the renamer needs to rename a certain quality release

Fixes #3122
2014-04-21 00:25:27 +02:00
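The idea behind #3122 — walk the profile's ordered quality list and only keep searching (or re-renaming) while a better entry sits above what was already snatched — can be sketched as follows (hypothetical names; not the actual searcher code):

```python
def needs_better_quality(profile_order, current_quality):
    """Return True if the searcher should keep looking for a better release.

    `profile_order` is the profile's quality list, best first,
    e.g. ['1080p', '720p', 'dvdrip'].
    """
    if current_quality not in profile_order:
        return True  # nothing usable snatched yet
    # Anything earlier in the list than what we have is an upgrade.
    return profile_order.index(current_quality) > 0
```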
mano3m
b01aa2b385 Add verified only option for kat
Fixes #3137
2014-04-20 22:54:43 +02:00
Ruud Burger
2e04890756 Merge pull request #3087 from softcat/develop
Added filmstarts.de userscript
2014-04-20 10:20:59 +02:00
Ruud Burger
1657857b4a Merge pull request #3131 from mano3m/develop_prefix
Add 'A' and 'An' to 'The' prefix
2014-04-20 10:20:25 +02:00
mano3m
72383592ba Clean-up 2014-04-20 10:18:44 +02:00
Ruud Burger
d093f935f9 Merge pull request #3130 from mano3m/develop_binsearch
Simplify binsearch result string
2014-04-20 10:09:02 +02:00
Ruud Burger
8cc7d101aa Merge pull request #3119 from mano3m/develop_tagging
Only tag existing files
2014-04-20 10:07:43 +02:00
Ruud Burger
f39eebbd22 Merge pull request #3118 from mano3m/develop_spotweb
Add password searching in spots from spotweb
2014-04-20 10:07:16 +02:00
Ruud Burger
3ac8bc738a Merge pull request #3105 from mano3m/develop_standardize_renamer
Use more standardized codec/source names
2014-04-20 10:06:46 +02:00
mano3m
0eac041a26 Add 'A' and 'An' to 'The' prefix
This was bothering me for a long time ;) We put 'The' at the end
but not 'A' or 'An'. Fixed now :)
2014-04-19 22:20:17 +02:00
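The prefix handling this commit extends ('The Matrix' → 'Matrix, The', now also for 'A' and 'An') can be sketched as a small helper; this is a simplified stand-in, not CouchPotato's actual title helper:

```python
ARTICLES = ('the', 'a', 'an')

def sort_title(title):
    """Move a leading article to the end, e.g. 'An Education' -> 'Education, An'."""
    first, _, rest = title.partition(' ')
    if first.lower() in ARTICLES and rest:
        return '%s, %s' % (rest, first)
    return title
```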
mano3m
ab0f5daaf3 Simplify binsearch result string
fixes #3099
2014-04-19 21:17:44 +02:00
mano3m
b59a0f82ab Add IMDB rentals list to charts
This should add the IMDB rentals list to the charts and imdb automation.
This is actually a nice list, as you can download the movies right away
instead of waiting until they are released, as with the rest of the imdb
charts.

The problem is that this does not work, and frankly I gave up. When I
type this in my Python command window it works:

'''
from bs4 import BeautifulSoup
import urllib2

data = urllib2.urlopen('http://www.imdb.com/boxoffice/rentals')
html = BeautifulSoup(data)
result_div = html.find('div', attrs = {'id': 'main'})
'''

Then result_div contains the list of movies. In the code from this PR
result_div becomes None....?!?!?! @Ruudburger please help before I jump
off my building ;)
2014-04-19 21:13:32 +02:00
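One common cause of the `None` result described above is that the site serves different markup (or blocks the request) when no browser-like User-Agent is sent, which would explain why the same snippet works in an interactive shell behind different defaults. A hedged sketch of that fix, translated to Python 3's urllib (the header value and function name are illustrative):

```python
from urllib.request import Request, urlopen

def build_chart_request(url):
    # Hypothetical fix: send a browser-like User-Agent so the server
    # returns the same HTML a browser would see.
    return Request(url, headers={'User-Agent': 'Mozilla/5.0'})

req = build_chart_request('http://www.imdb.com/boxoffice/rentals')
# urlopen(req) would then fetch the page with the custom header;
# the actual request is omitted here to stay offline.
```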
mano3m
9b75e6af5c Only tag existing files
Fixes #3088
2014-04-15 21:35:16 +02:00
mano3m
aa37f2b0ef Add password searching in spots from spotweb 2014-04-15 19:23:07 +02:00
Steven Lu
d22237a5cc Adding in a new source for automation. 2014-04-14 23:50:10 -04:00
Ruud Burger
26f5e8aa4b Merge pull request #3109 from mano3m/develop_provider
Provider fixes
2014-04-14 09:06:07 +02:00
Ruud Burger
9072c6cae0 Merge pull request #3112 from jonnsl/bluray_chart
Don't show duplicated results in the blu-ray releases chart.
2014-04-14 09:01:22 +02:00
Jonnathan
8739c1197f Don't show duplicated results in the blu-ray releases chart. 2014-04-14 03:56:59 -03:00
mano3m
a477973862 Provider fixes
Fixes #3097, #3086, #3106
2014-04-13 15:48:25 +02:00
mano3m
95ce26d261 Use more standardized codec/source names
Fixes #999
2014-04-12 18:27:59 +02:00
Joel Kåberg
8c934c1ca8 Merge pull request #3080 from fuzeman/feature/dev_rtorrent
[rtorrent] fixed how torrent status is determined
2014-04-09 14:48:18 +02:00
softcat
349d7d4866 Added filmstarts.de userscript 2014-04-08 13:44:03 +02:00
Dean Gardiner
f1ea8fa693 [rtorrent] fixed how torrent status is determined 2014-04-06 22:24:27 +12:00
Ruud
685210aee3 Nested media index 2014-04-05 21:18:09 +02:00
Ruud
ae42b62b3c Remove downloaders.js from clientscript 2014-04-05 16:39:31 +02:00
Ruud
7faa7c3dba Use correct super class 2014-04-05 12:48:36 +02:00
Ruud
eba36b6d57 Allow type option in listing 2014-04-05 11:52:10 +02:00
Ruud
84a2afe08f Refactor downloaders and pages 2014-04-05 11:30:23 +02:00
Ruud
98a85f6950 Charts cleanup 2014-04-05 09:54:24 +02:00
Ruud
c89c99b272 Don't refresh charts at startup 2014-04-04 19:06:20 +02:00
Ruud
3f16dbd09c Sort releases based on preferred method in api return 2014-04-04 17:52:33 +02:00
Ruud
e547851905 Failed deleting from wanted 2014-04-04 17:17:26 +02:00
Ruud
cbb0462948 Only list inactive downloadstatus support once 2014-04-04 17:10:57 +02:00
Ruud Burger
a185292578 Merge pull request #3052 from jeremiahelroy/develop
making the scanner follow symlinks
2014-04-04 15:58:14 +02:00
Ruud Burger
cec1f54cdd Merge pull request #3042 from mano3m/develop_update_unrar
Update unrar2 lib to 0.99.3
2014-04-04 15:56:34 +02:00
Ruud
0112a3141b Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-04-04 15:50:41 +02:00
Ruud
5f93b08c23 Merge branch 'refs/heads/mikke89-charts-v2' into develop 2014-04-04 15:50:32 +02:00
Ruud
ff0de896c4 Cleanup and default charts 2014-04-04 15:50:20 +02:00
Ruud Burger
6d98f67668 Merge pull request #3070 from mano3m/develop_fix_ignore
Check if folder exists in tagging
2014-04-04 14:59:33 +02:00
mikke89
5d5cf5cf29 Display charts (such as from imdb, blu-ray.com) on home page. 2014-04-04 06:42:50 +02:00
mano3m
610edea20e Re-add path 2014-04-03 22:46:22 +02:00
mano3m
8f4219a93c Check if folder exists in tagging
Fixes #3069
2014-04-03 22:41:38 +02:00
Ruud
9540ae5a19 Hotlink userscript gif 2014-04-01 20:48:58 +02:00
Ruud
0f7c3f5d0f Use correct id returned from automation add. fix #3050 2014-04-01 20:38:27 +02:00
Ruud
39fb3a1107 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-04-01 20:34:04 +02:00
Ruud
e609931d2c Show off browser extension 2014-04-01 20:33:47 +02:00
Ruud Burger
70d94cda8c Merge pull request #3051 from bazbjzy/develop
Added custom sounds ability to Pushover Advanced Settings
2014-04-01 08:07:22 +02:00
jeremiahelroy
5c89a52f23 making the scanner follow symlinks 2014-04-01 00:48:08 -04:00
bazbjzy
686e0a9441 Removed empty line 2014-03-31 18:04:02 -07:00
bazbjzy
e8dcf5ee02 Added ability to configure Pushover custom sounds. 2014-03-31 17:59:14 -07:00
Ruud
95369e79a5 Add profile to returned in_wanted values 2014-03-31 00:31:19 +02:00
Ruud
eb0a8454bc Update description text 2014-03-30 23:35:18 +02:00
Ruud
4f059c2549 Point to browser extension 2014-03-30 23:32:56 +02:00
Ruud
fb7dbd5716 Update Userscript 2014-03-30 23:26:14 +02:00
Ruud Burger
07fc4b3728 Merge pull request #3043 from fuzeman/develop_renamer
Fixed release files bug in renamer
2014-03-30 17:52:53 +02:00
Ruud Burger
2d5b02baf9 Merge pull request #3040 from fuzeman/develop_iptorrents
Fixed searching bug in IPT provider
2014-03-30 17:50:41 +02:00
Ruud
f8b2547a45 Movie add faulty parameter. fix #3020 2014-03-30 17:33:44 +02:00
Dean Gardiner
f8cc8acfec Fixed release files bug in renamer 2014-03-30 21:54:44 +13:00
Dean Gardiner
17787c5a4f Fixed searching bug in IPT provider 2014-03-30 20:58:10 +13:00
Ruud
304de5adb6 Load correct html files 2014-03-29 23:47:35 +01:00
Ruud
46db38c5bf Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-03-29 23:44:41 +01:00
Ruud
99e77e409a Spring cleanup 2014-03-29 23:44:14 +01:00
mano3m
6152ddbd5f Unrar cleanup 2014-03-29 21:39:38 +01:00
mano3m
f99a94d685 Update unrar2 lib to 0.99.3
Fixes #2930
2014-03-29 21:25:26 +01:00
Ruud Burger
47f58ff45f Merge pull request #3039 from fuzeman/feature/dev_rtorrent
[rtorrent] Removed broken ratio group usage
2014-03-29 08:25:36 +01:00
Dean Gardiner
f225066130 [rtorrent] Removed broken ratio group usage 2014-03-29 01:21:05 +13:00
Ruud
83c5d701b3 Use release as dict not class 2014-03-24 22:14:47 +01:00
Ruud
ffb3359e66 Compact and reindex database api calls 2014-03-24 21:41:48 +01:00
Ruud
0861b21532 Make sure title is valid when adding it to index 2014-03-24 21:40:05 +01:00
Ruud
e7420367f1 Add Torrentz provider 2014-03-23 21:58:08 +01:00
Ruud
1998c779c7 Add more trackers 2014-03-23 21:50:37 +01:00
Ruud
93eb33811a Use correct default type 2014-03-23 21:50:24 +01:00
Ruud
d7bf9dba01 Skip leftover release info 2014-03-23 20:14:10 +01:00
Ruud
d5c6942266 Use media_id to get movie in renamer 2014-03-23 17:55:33 +01:00
Ruud
e870fab277 Scanner didn't use correct get key to determine movie 2014-03-23 17:55:12 +01:00
Ruud
f3ae63c7a9 Don't extend release_download when none is set 2014-03-23 17:50:49 +01:00
Ruud Burger
3df1f1b153 Merge pull request #3009 from wouter0100/patch-1
Fixed first item in quality group
2014-03-23 15:53:11 +01:00
Wouter van Os
74fd7c684e Fixed first item in quality group
The first item within a quality group always had 3D set to "true".
2014-03-23 15:18:18 +01:00
Ruud
745b262800 Don't check 3d checkbox on add 2014-03-22 23:10:31 +01:00
Ruud
72f6516a1c Fix handle image url 2014-03-22 23:01:03 +01:00
Ruud
7bb723d6b3 Key errors 2014-03-22 22:39:26 +01:00
Ruud
2ccdc8ffdc Allow 3d categories 2014-03-22 22:00:26 +01:00
Ruud
1cabf64993 Add 3d categories 2014-03-22 21:32:27 +01:00
Ruud
c55bd5a35d Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-03-22 21:06:59 +01:00
Ruud
77a3552797 Add 3d support for searching 2014-03-22 21:03:19 +01:00
jkaberg
81efd4bce7 [qbittorrent] minor cleanup 2014-03-22 18:35:11 +01:00
Ruud
98183ccc1e Sorted releases 2014-03-22 14:38:58 +01:00
Ruud Burger
09df863b6c Merge pull request #3004 from ramon86/develop
Added FilmCentrum.nl as a provider for the Chrome extension
2014-03-22 14:33:46 +01:00
Ramon van Dam
4e70c1882b Added FilmCentrum.nl as a provider for the Chrome extension 2014-03-22 14:31:29 +01:00
Ruud
d38d581d1d Add 3d to quality tags on movie item 2014-03-22 12:36:11 +01:00
Ruud
61c95240c2 Failed editing movie 2014-03-22 12:35:56 +01:00
Ruud
59347400c3 Add 3D tags 2014-03-22 12:20:16 +01:00
Ruud
f976e04597 Save 3d in quality profile 2014-03-22 12:19:42 +01:00
Ruud
1602fe88e6 Close connection not cursor 2014-03-22 09:59:22 +01:00
Ruud
d4eca60b1d Identifiers fix 2014-03-22 09:52:21 +01:00
Ruud
5a4467adb9 Only return movies from omdb 2014-03-21 21:40:38 +01:00
Ruud
f50852fee0 Get proper iTunes namespace. fix #2978 2014-03-21 18:52:11 +01:00
Ruud
1f647b3cc7 Autoload missing for notifications. fix #2980 2014-03-21 18:46:01 +01:00
Ruud
caf4eab104 Get correct size from HDBits api. fix #2997 2014-03-21 18:34:58 +01:00
Ruud
334078fc34 Allow password tag in returned release dict 2014-03-21 18:25:36 +01:00
Ruud
25b1d86c50 get identifier in awesomehd 2014-03-21 18:10:32 +01:00
Ruud
78e2ff4870 Cleanup 2014-03-21 18:01:52 +01:00
Ruud
ad94cce283 Make release files normal list 2014-03-21 17:35:26 +01:00
Ruud
b4610e5c23 Make sure to make release download id lowercase 2014-03-21 17:34:24 +01:00
Ruud
e12dcc2fb8 Also return releases on notify frontend 2014-03-21 16:37:35 +01:00
Ruud
a818276b6d Move multi-identifier search out of index 2014-03-21 16:32:31 +01:00
Ruud
269d779df7 Move for_media out of release index 2014-03-21 16:28:40 +01:00
Ruud
b63f7b7e5d Move with_status out of releases 2014-03-21 16:21:49 +01:00
Ruud
b4a3ac8081 Remove get_session 2014-03-21 16:20:25 +01:00
Ruud Burger
bbaaaa72fb Merge pull request #3000 from fuzeman/feature/dev_rtorrent
[rtorrent] Fixed connection bug when using SSL + Basic Auth
2014-03-21 14:50:50 +01:00
Ruud
89c83001ca Move status_get outside index 2014-03-21 14:49:44 +01:00
Dean Gardiner
61f1fdabd1 Removed print statement from rtorrent downloader 2014-03-22 02:46:05 +13:00
Dean Gardiner
28062eacb6 [rtorrent] cleaned up connection, '+https' is now added to 'httprpc' protocol if SSL option is enabled 2014-03-22 02:37:58 +13:00
Dean Gardiner
8bdbf8df2e Updated rtorrent-python library
- Fixed bug with basic auth on secure connections
- Added 'test_connection' method to RTorrent class
- Minor adjustment to authorization encoding
2014-03-22 02:37:57 +13:00
Ruud
27e4800ed2 try next release, use media_id 2014-03-21 14:31:49 +01:00
Ruud
37bc54e01e Add title to boxcar2 message
closes #2977
2014-03-21 13:32:36 +01:00
Ruud
6115f83a09 Update TPB proxies 2014-03-21 13:29:34 +01:00
Ruud
a8159c9e55 Make sure quality sizes are int on migrate 2014-03-21 13:23:52 +01:00
Ruud
f734e27d23 Make sure to save size as int 2014-03-21 13:22:16 +01:00
Ruud
8a118df636 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-03-21 13:17:30 +01:00
Ruud Burger
a19b75760f Merge pull request #2987 from clinton-hall/dev-filesize
only compare filesize as int.
2014-03-21 13:17:20 +01:00
Ruud
1224b98745 Add releases to media.get.
re-use where possible
2014-03-21 13:15:35 +01:00
Ruud
6243ed3bd5 Add destination support to Synology downloader 2014-03-21 12:30:44 +01:00
Ruud
41e94e1e22 Make sure media dict has category key 2014-03-21 12:15:46 +01:00
Ruud
8c6940c351 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-03-21 11:06:41 +01:00
Ruud
384a2e0e15 Suggestions not showing 2014-03-21 11:06:25 +01:00
Joel Kåberg
a691841756 Merge pull request #2993 from fabcouwer/contribution_guide
Rewrite contributing.md
2014-03-20 22:55:49 +01:00
Friso Abcouwer
ff94bd6a90 Rewrite contributing.md 2014-03-20 20:17:13 +01:00
Ruud
00419910b4 Use getIdentifier in suggestions 2014-03-20 16:54:54 +01:00
Ruud
21c9d7fcc3 Use identifier helper 2014-03-20 16:53:27 +01:00
Ruud
e314c605f1 Missing identifier key 2014-03-19 23:28:12 +01:00
Ruud
8316b5cb29 Key identifiers missing 2014-03-19 23:24:16 +01:00
Ruud
be46ed12ac get identifier helper 2014-03-19 23:06:27 +01:00
Ruud
a2d22b6feb Set cleanup interval 2014-03-19 22:46:14 +01:00
Ruud
f4e373447e File not properly sent to Sabnzbd 2014-03-19 22:37:44 +01:00
Ruud
b69f8b7ed5 Files not properly sent to sabnzbd 2014-03-19 22:33:14 +01:00
Ruud
5b2dfffe0f Use correct year key 2014-03-19 22:09:27 +01:00
Ruud
b347f761a7 Ignore faulty category tables
They don't have any categories anyway, so we might as well ignore the error.
2014-03-19 18:13:55 +01:00
Ruud
445724573d Make sure movie is added with multi identifier 2014-03-19 18:02:45 +01:00
Ruud
8c5e0cf0a7 Make media index multi identifier based 2014-03-19 17:55:21 +01:00
Ruud
c6016a25df Destroy index and re-add on updated version 2014-03-19 09:22:51 +01:00
clinton-hall
5a0a5ad83b only compare filesize as int. 2014-03-19 15:45:20 +10:30
Ruud
3f0a0f552b Keep dict keys and only make array if all are ints in request params 2014-03-18 22:59:39 +01:00
Ruud
63fd35a95c Media helpers 2014-03-18 22:59:01 +01:00
jkaberg
db163e7bd1 remove debug stuff 2014-03-16 22:16:42 +01:00
jkaberg
f3cd569e77 fixed release_downloads, now properly returning data 2014-03-16 22:15:37 +01:00
jkaberg
a95671491d added missing vars to Torrent and File class 2014-03-16 22:13:59 +01:00
jkaberg
95295e47ab Catch vars so we don't spam the log 2014-03-16 21:50:11 +01:00
Ruud
a54e9ddd9c Merge branch 'refs/heads/nosql' into develop 2014-03-16 21:35:28 +01:00
Ruud
0e5f89d7d6 Use identifier to log already snatch quality 2014-03-16 21:33:49 +01:00
jkaberg
742d5cbfb3 qbittorrent downloader working 2014-03-16 21:33:45 +01:00
Ruud
e3fa695ad4 ThePirateBay don't overwrite search_url 2014-03-16 21:31:42 +01:00
Ruud
d6675f3311 Use correct super class 2014-03-16 21:18:31 +01:00
Ruud
64850a45da Merge branch 'refs/heads/nosql' into develop 2014-03-16 21:12:46 +01:00
Ruud
6aa0b7c748 Missing autoload 2014-03-16 21:04:23 +01:00
Ruud
fd80728857 Merge branch 'refs/heads/nosql' into develop
Conflicts:
	couchpotato/core/providers/torrent/yify/main.py
2014-03-16 20:59:31 +01:00
Ruud
9618a8d543 Position yeah in thumblist 2014-03-16 20:55:13 +01:00
Ruud
ec3a6e65ae Safer way to get reddit url 2014-03-16 18:27:58 +01:00
Ruud
d74578ec66 FreeBSD guide update 2014-03-16 17:54:57 +01:00
Ruud Burger
865dd24901 Merge pull request #2976 from MLWALK3R/develop
Proxy; Fixed a link with SSL on Yify
2014-03-16 16:54:35 +01:00
Ruud
e063028c7d Remove Yify ssl url 2014-03-16 16:54:21 +01:00
Ruud
db11c1b7a8 BinSearch: only check if there aren't enough parts 2014-03-16 16:53:15 +01:00
Michael
89d7a924fb Proxy; Fixed a link with SSL 2014-03-16 15:49:16 +00:00
jkaberg
fe16115b20 add qbittorrent client 2014-03-16 14:00:01 +01:00
Ruud
fbccba77a7 64Bit installer setup 2014-03-16 13:00:09 +01:00
jkaberg
07db34beb9 Merge remote-tracking branch 'origin/nosql' into nosql-qbitorrent 2014-03-16 12:38:12 +01:00
jkaberg
f42fb2fdd2 updated qbittorrent-python 2014-03-16 11:35:28 +01:00
Ruud
d3efda74b2 One up 2014-03-16 09:44:44 +01:00
Ruud
66b849cb29 Merge branch 'refs/heads/master' into desktop
Conflicts:
	version.py
2014-03-16 09:43:32 +01:00
Ruud
5853a373f3 Add library query 2014-03-16 09:20:53 +01:00
Ruud
951fbdccbd Don't load MovieBase twice 2014-03-16 00:46:56 +01:00
Ruud
c6d20eb91f Code cleanup 2014-03-16 00:06:39 +01:00
Ruud
b68cea3921 Getting release download didn't use correct key 2014-03-15 23:44:07 +01:00
Ruud
36125f1067 Make sure to use proper category id 2014-03-15 23:43:46 +01:00
Ruud
eee16c7a3d Newznab failed when doing manual download 2014-03-15 23:43:22 +01:00
Ruud
bf1d93f256 Test download connection failed 2014-03-15 23:42:58 +01:00
jkaberg
42e3c95f87 added qbittorrent-python 2014-03-15 17:05:26 +01:00
Ruud
ee702d92e6 Delete empty folders and leftover .pyc files on restart 2014-03-15 15:34:43 +01:00
Ruud
f5aae23111 Make sure log navigation doesn't overlay mask 2014-03-15 12:49:50 +01:00
Ruud
b19f98ef5b Merge branch 'refs/heads/develop' 2014-03-15 12:35:28 +01:00
Ruud
178f770b16 Use key to get release last_edit 2014-03-15 12:33:01 +01:00
Ruud
1b2d72531f Newznab url creation failed 2014-03-15 12:31:55 +01:00
Ruud
7e08454edd Import issues 2014-03-15 12:23:11 +01:00
Ruud
72d318323e Release ignore didn't get correct parameter 2014-03-15 12:17:53 +01:00
Ruud
b611a98bae Missing fireEvent import in torrentleech 2014-03-15 11:55:05 +01:00
Ruud
8b2eb50f29 Base classes for matcher and library 2014-03-15 11:51:45 +01:00
Ruud
988d0d6e35 Merge branch 'refs/heads/nosql2' into nosql 2014-03-15 10:17:45 +01:00
Ruud
48ec6fc757 Missing autoloads 2014-03-12 23:55:56 +01:00
Ruud
0921c5e160 Autoload suggestions 2014-03-12 23:44:24 +01:00
Ruud
83843ae210 Make sure in_library only contains relevant releases 2014-03-12 23:15:14 +01:00
Ruud
e5e768c56f Migrate hide profile 2014-03-12 22:46:12 +01:00
Ruud
f775c9da0b Only show log after successful load 2014-03-12 22:34:32 +01:00
Ruud
6a9c3dac77 Rename so it doesn't try to import itself 2014-03-12 22:30:10 +01:00
Ruud
11aaaecb7b Get colors back, remove logged string 2014-03-12 21:47:07 +01:00
Ruud
12c08154c5 Optimize imports 2014-03-12 21:41:29 +01:00
Ruud
79d6a6f85f Import fixes 2014-03-12 09:40:20 +01:00
Ruud
4513f03e8f Move movie to single file 2014-03-11 23:31:37 +01:00
Ruud
f3adfca9c5 Move media to single file 2014-03-11 23:01:42 +01:00
Ruud
0b61ec1e13 Move plugins to single file 2014-03-11 22:47:42 +01:00
Ruud
8492c9b214 Move notifications to single file 2014-03-11 22:31:47 +01:00
Ruud
2a60c52483 Move downloaders to single file 2014-03-11 22:28:56 +01:00
Ruud
917e813607 Move _base to single file 2014-03-11 22:15:58 +01:00
Ruud
c20f64685f Autoload from single file 2014-03-11 22:15:27 +01:00
Ruud
471229216a Provider restructure 2014-03-11 18:45:50 +01:00
Ruud
28661ab11a Move providers under media 2014-03-10 20:24:04 +01:00
Ruud
11c348a3d7 Merge branch 'refs/heads/develop' into nosql 2014-03-10 16:04:44 +01:00
Ruud
ffe6b7dd70 Add boxcar 2 support. closes #2886 2014-03-10 15:42:40 +01:00
Ruud
720af9085a Get new info when titles are missing 2014-03-10 14:30:16 +01:00
Ruud
8916ea5299 Don't unicode ints and floats 2014-03-10 14:12:19 +01:00
Ruud
bf9a43b3d1 Make sure name is string type, not bs4 class 2014-03-10 14:10:29 +01:00
Ruud
bca597c4e2 Speed up log view in Chrome 2014-03-10 00:23:41 +01:00
Ruud
33e2f63ed5 Don't try to use release info if it doesn't exist 2014-03-10 00:00:33 +01:00
Ruud
e3f6df7120 Convert sp to unicode 2014-03-09 23:47:33 +01:00
Ruud
58f198ddad Zero fill identifier when adding movie 2014-03-09 23:39:46 +01:00
Ruud
61edcfe4f3 Remove debug comment 2014-03-09 15:30:21 +01:00
Ruud
85bc3ddde6 Don't try to reuse generator 2014-03-09 15:29:53 +01:00
Ruud
488b631c38 Migrate release files 2014-03-09 14:32:41 +01:00
Ruud
da1b430200 Only update media when needed 2014-03-09 14:30:43 +01:00
Ruud
bef76f0118 Add download info to release when available 2014-03-09 14:30:23 +01:00
Ruud
dd6baa72fa Faster search title index 2014-03-09 14:29:54 +01:00
Ruud
519b832d8c Time migration 2014-03-09 11:52:15 +01:00
Ruud
131326675e Update release on rename
Prevent RevConflict
2014-03-09 00:24:54 +01:00
Ruud
73dfa232f9 Allow "or" for media and release status in movie.list 2014-03-08 23:14:16 +01:00
Ruud
3f64173905 Don't trigger toasts on init 2014-03-08 22:44:28 +01:00
Ruud
5cc4260d8e Remove unused method 2014-03-08 22:30:17 +01:00
Ruud
7f33a3847c Try existing images first 2014-03-08 22:16:19 +01:00
Ruud
c9be74ce80 Use correct keys when renaming media 2014-03-08 22:15:07 +01:00
Ruud
f5d29eafe0 Extend title helper 2014-03-08 22:14:54 +01:00
Ruud
274e2c1cc2 Use proper download_info dict 2014-03-08 20:22:35 +01:00
Ruud
c2233f7474 Merge branch 'refs/heads/develop' into nosql 2014-03-08 18:55:55 +01:00
Ruud
75f22f44a1 Reference before assigned 2014-03-08 18:55:38 +01:00
Ruud
d60a8a71b7 Check if file has moved, ignore copystat errors. close #2936 2014-03-08 18:18:06 +01:00
Ruud
f8bfd6fd3f Coming soon thumbnail height fix 2014-03-08 16:50:53 +01:00
Ruud
f2bc735bc0 Merge branch 'refs/heads/develop' into nosql
Conflicts:
	couchpotato/core/plugins/dashboard/main.py
2014-03-08 14:07:34 +01:00
Ruud
9e471ac389 Add "I Just Watched" Reddit to userscripts. fix #2621 2014-03-08 14:05:49 +01:00
Ruud
ca34cbd180 Check for year in coming soon 2014-03-08 12:34:42 +01:00
Ruud
9f4ea662da Use natural sorting
Conflicts:
	couchpotato/core/helpers/request.py
2014-03-08 12:30:47 +01:00
Ruud
3172a4d030 Check for year in coming soon 2014-03-08 12:27:06 +01:00
Ruud
c58315e2ee Use natural sorting 2014-03-08 11:59:12 +01:00
Ruud
dc0ea5b3f6 Use proper sorting 2014-03-08 11:52:59 +01:00
Ruud
b50cf1cf4c Only allow next year for couldbereleased check 2014-03-08 10:50:11 +01:00
Ruud
a7a4499dd4 Thumbnail check 2014-03-08 10:31:59 +01:00
Ruud
f78261ee32 Merge branch 'refs/heads/develop' into nosql 2014-03-08 10:02:42 +01:00
Ruud
b69898d624 Remove double self in filetime check. fixes #2952 2014-03-08 09:37:39 +01:00
Ruud
cfa8702654 Close sqlite connection 2014-03-07 20:42:08 +01:00
Ruud
dd9c65db4c Migrate properties 2014-03-07 20:19:52 +01:00
Ruud
9e7e29f03f Merge branch 'refs/heads/develop' into nosql 2014-03-07 19:14:02 +01:00
Ruud
2066625bf0 Don't use ctime on unix system. Cleanup check a bit. close #2904 2014-03-07 18:58:27 +01:00
Ruud
7af1d00ea2 Allow passwords inside nzb name 2014-03-07 18:10:17 +01:00
Ruud
5b279a48cb Make sure q is first for nzbclub 2014-03-07 17:38:40 +01:00
Ruud
3d6e84e11c Migrate logging 2014-03-07 16:05:52 +01:00
Ruud
527d6ab7ff Properly offset media listing 2014-03-07 15:44:09 +01:00
Ruud
8a295c72ba Only add single image file 2014-03-07 15:43:05 +01:00
Ruud
d2b82a37b2 Only migrate file if it is found 2014-03-07 15:42:06 +01:00
Ruud
ec4e680d62 Always check and remove files on movie info 2014-03-07 15:41:08 +01:00
Ruud
aa80ed3d4b Don't log big object 2014-03-07 15:33:12 +01:00
Ruud
1d4a7894e8 Fix broken files on refresh 2014-03-07 12:39:58 +01:00
Ruud
2d28cb6897 Backup databases 2014-03-07 11:18:14 +01:00
Ruud
89e561c991 Use time to index notifications 2014-03-06 22:39:11 +01:00
Ruud
d16dd7c75d Use proper sorting 2014-03-06 21:30:59 +01:00
Ruud
e51e6b2171 Use correct release status
Remove continue from category migrate
2014-03-05 16:43:35 +01:00
Ruud
3eaf5c9bc0 Use status from release when available 2014-03-05 16:27:55 +01:00
Ruud
4625c920c3 Stop notify frontend triggering for media add 2014-03-05 16:27:27 +01:00
Ruud
2ce0f7beb4 Migrate update 2014-03-05 16:27:08 +01:00
Ruud
81ba95f540 Migrate library file info 2014-03-05 15:41:16 +01:00
Ruud
b25e0ea393 Remove file_type references 2014-03-05 15:24:33 +01:00
Ruud
2632b34438 Use proper default id 2014-03-05 15:24:19 +01:00
Ruud
ff60013335 Only use filename for cache api call 2014-03-05 15:24:08 +01:00
Ruud
a0bcb03dde Index fixes 2014-03-05 15:12:00 +01:00
Ruud
f0044a9342 Use correct index for movie_type 2014-03-05 11:43:50 +01:00
Ruud
04fb81e071 More import 2014-03-04 22:42:03 +01:00
Ruud
642b665418 Merge branch 'refs/heads/develop' into nosql 2014-03-04 20:34:32 +01:00
Ruud
a5fa0681ed Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-03-03 23:35:06 +01:00
Ruud
22e922e860 Split socket error to nr 2014-03-03 23:34:49 +01:00
Ruud Burger
0126f5ae84 Merge pull request #2921 from MLWALK3R/patch-1
replaced a duplicate URL on TPB
2014-03-03 23:24:45 +01:00
Ruud
c389790cf2 Merge branch 'refs/heads/develop' 2014-03-03 22:19:29 +01:00
Ruud
cfb246fa84 Make sure imdb rating exists before using it 2014-03-03 22:11:59 +01:00
Michael Walker
651119b7dd replaced a duplicate URL
2014-02-28 17:35:08 +00:00
Ruud
d7445dfa80 Merge branch 'refs/heads/develop' 2014-02-26 14:00:56 +01:00
Ruud Burger
f944a70a9c Merge pull request #2911 from fuzeman/feature/dev_rtorrent
[rtorrent] Fixed naming issue
2014-02-26 13:46:52 +01:00
Dean Gardiner
9056f5ae59 Fixed naming issue in rtorrent downloader 2014-02-26 14:52:10 +13:00
Ruud
ed62c981cc Add quality tests 2014-02-25 22:04:43 +01:00
Ruud
36782768a4 Merge branch 'refs/heads/develop' 2014-02-25 21:37:29 +01:00
Ruud Burger
2a7ba28903 Merge pull request #2902 from MLWALK3R/develop
SSL'd and Updated
2014-02-25 21:35:45 +01:00
Ruud Burger
e8ec2ef8d1 Merge pull request #2906 from tehspede/develop
Search url has method defined twice (2 and 3) we only want 3.
2014-02-25 21:31:29 +01:00
Ruud
2c9d487614 Update build url 2014-02-25 21:20:59 +01:00
tehspede
864e8654c3 Search url has method defined twice (2 and 3) we only want 3. 2014-02-25 18:29:51 +02:00
Michael
e11453aafb SSL'd and Updated
Add SSL to some URLs and update the Apple RSS link.
2014-02-24 21:59:45 +00:00
Ruud
c1596f098c Merge branch 'refs/heads/develop' into nosql 2014-02-24 22:54:04 +01:00
Ruud
e57620f67c Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-02-24 22:49:09 +01:00
Ruud
e481763967 Merge branch 'refs/heads/mikke89-downloaders_test' into develop 2014-02-24 22:48:19 +01:00
Ruud
2b3d755c64 Cleanup downloader testbuttons PR 2014-02-24 22:48:12 +01:00
Ruud
fc2db36820 Merge branch 'downloaders_test' of git://github.com/mikke89/CouchPotatoServer into mikke89-downloaders_test
Conflicts:
	couchpotato/core/downloaders/rtorrent/main.py
2014-02-24 22:07:49 +01:00
Ruud Burger
4fdea782f3 Merge pull request #2901 from MLWALK3R/patch-5
Torrentshack description https
2014-02-24 21:57:03 +01:00
Michael Walker
188a1a3b03 HTTP to HTTPS
Updated URL to SSL for better account security.
2014-02-24 20:56:01 +00:00
Ruud Burger
8e2014f2d4 Merge pull request #2899 from MLWALK3R/patch-4
ILoveTorrents use SSL
2014-02-24 21:48:36 +01:00
Michael Walker
fb95d7923f HTTP to HTTPS
Updated URLs to SSL, better account security.
2014-02-24 20:46:47 +00:00
Ruud Burger
fe5ca69f36 Merge pull request #2897 from MLWALK3R/patch-2
Replaced proxies for TPB
2014-02-24 21:41:31 +01:00
Ruud Burger
1086b808dc Merge pull request #2898 from MLWALK3R/patch-3
Changed http to https
2014-02-24 21:32:32 +01:00
Michael Walker
0050e5cdfc Changed http to https
Adjusted http to SSL, better security when dealing with logins.
2014-02-24 18:23:18 +00:00
Michael Walker
6b357674d0 Replaced proxies
Removed dead/blocked proxies, added new unblocked/working links
2014-02-24 18:18:39 +00:00
Ruud
8f44dfcde5 Merge branch 'refs/heads/develop' into nosql
Conflicts:
	couchpotato/core/providers/torrent/sceneaccess/main.py
2014-02-24 18:49:07 +01:00
Ruud Burger
82c0592e49 Merge pull request #2875 from fuzeman/feature/dev_rtorrent
[rtorrent] Fixed how torrent status is determined
2014-02-24 18:48:01 +01:00
Ruud
28ab4576d5 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-02-24 18:47:08 +01:00
Ruud Burger
2debd5598f Merge pull request #2888 from xombiemp/ptp-golden
PTP Golden release fix
2014-02-24 18:46:53 +01:00
Ruud
d86d44e2d4 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-02-24 18:46:24 +01:00
Ruud Burger
3d85460dc8 Merge pull request #2887 from koppelbakje/develop
[SceneAccess] Change search method to 3 (description)
2014-02-24 18:46:15 +01:00
Ruud
52ce85fbf2 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-02-24 18:45:27 +01:00
Ruud
6d70533e0b Yify proxy changes 2014-02-24 18:45:19 +01:00
Ruud Burger
4d8338e829 Merge pull request #2878 from fuzeman/feature/rtorrent/httprpc
[rtorrent] HTTP-RPC support
2014-02-24 18:44:12 +01:00
Andrew Parker
a4e48e1f6b I've found that the score applied to Golden torrents is not enough to snatch them reliably. When I set the Prefer Golden setting, I expect it to always choose the Golden release over a Scene release. Here's an excerpt from my log that shows this setting failing to grab the Golden release over a Scene release:
02-22 13:56:17 INFO [core.media.movie.searcher] Search for Thor: The Dark World in 720P
02-22 13:56:21 INFO [otato.core.providers.base] Found correct release with weight 1.00, old_score(4581) now scaled to score(4581)
02-22 13:56:21 INFO [otato.core.providers.base] Found: score(4581) on PassThePopcorn: Thor The Dark World (2013) - 720p Blu-ray x264 Scene (720p)
02-22 13:56:21 INFO [otato.core.providers.base] Found correct release with weight 1.00, old_score(1257) now scaled to score(1257)
02-22 13:56:21 INFO [otato.core.providers.base] Found: score(1257) on PassThePopcorn: Thor The Dark World (2013) - 720p Blu-ray x264 HQ With Commentary (720p)
02-22 13:56:21 INFO [core.media._base.searcher] Wrong: Required word missing: thor the dark world 2013 720p web h 264 extras 720p
02-22 13:56:24 INFO [tato.core.plugins.release] Snatched "Thor The Dark World (2013) - 720p Blu-ray x264 Scene (720p)": Thor: The Dark World (2013) in 720P

This modification fixes this specific example and hopefully all others like it.
2014-02-22 15:35:08 -07:00
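The underlying idea of the fix can be sketched as a score bump large enough that a preferred Golden release always outranks a Scene one; the function name, the `prefer_golden` flag, and the bonus value below are hypothetical, not CouchPotato's actual scoring code:

```python
def scale_score(base_score, is_golden, prefer_golden=True, golden_bonus=9000):
    # When "Prefer Golden" is set, add a bonus larger than any realistic
    # score gap, so a Golden release always outranks a Scene release.
    if is_golden and prefer_golden:
        return base_score + golden_bonus
    return base_score

# Scores taken from the log excerpt above:
scene_score = scale_score(4581, is_golden=False)
golden_score = scale_score(1257, is_golden=True)
```

With a flat bonus like this, the Golden release at score 1257 now outranks the Scene release at 4581, matching what the setting promises.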
Leon Koppel
790a74f9e4 Change search method to 3 (description) 2014-02-22 18:43:19 +01:00
mikke89
893dde9958 rTorrent connection test: Error message on version check fail 2014-02-21 20:28:49 +01:00
Dean Gardiner
d448b8cd99 Adjusted rtorrent connect method to work with httprpc URIs, adjusted option descriptions 2014-02-21 15:48:17 +13:00
Dean Gardiner
ca2c4a0b3e Updated rtorrent-python library (HTTP-RPC support)
- Added URI transforming to cleanly support HTTP-RPC
2014-02-21 15:27:35 +13:00
mikke89
499b8193ab Added return message text to frontend 2014-02-21 02:26:04 +01:00
mikke89
1f18d2b09c Test downloader connection: Check version of uTorrent and Sabnzbd 2014-02-21 02:09:16 +01:00
Dean Gardiner
a92d6fd35c Fixed how the status is determined in the rtorrent downloader 2014-02-21 01:13:12 +13:00
Ruud
12adde8f80 Use new id for pushbullet. fix #2864 2014-02-17 20:41:03 +01:00
Ruud
6437079be3 Merge branch 'refs/heads/develop' into nosql
Conflicts:
	couchpotato/core/media/__init__.py
	couchpotato/core/notifications/trakt/main.py
2014-02-16 15:56:18 +01:00
Ruud
8b747dff9b Use correct var name in nzbvortex 2014-02-16 15:48:55 +01:00
Ruud
027ff43dfd Path encode files in rename. fix #2846 2014-02-16 14:55:35 +01:00
Ruud
f50c8504cf Encode before copy metadata. fix #2832 2014-02-16 14:19:15 +01:00
Ruud
30f5a3944c Use test url for trakt notification test. fix #2798 2014-02-16 14:11:21 +01:00
Ruud
a1c0b000a4 Update TMDB api 2014-02-16 10:48:48 +01:00
Ruud
f22778aacb Use proper check 2014-02-16 10:40:54 +01:00
Ruud
888ee07f65 Check responsecodes 2014-02-16 10:27:31 +01:00
Ruud Burger
aa5937c278 Merge pull request #2824 from fuzeman/feature/dev_rtorrent
[rtorrent] Fixed bug where setting changes would not take effect
2014-02-16 10:13:02 +01:00
Ruud
4831c80598 Update nzbclub url 2014-02-16 09:59:37 +01:00
Ruud
b9a724c8bb Merge branch 'refs/heads/develop' 2014-02-16 09:43:03 +01:00
Ruud
886a271d19 Use correct ordering for request arrays. fix #2810 2014-02-16 09:42:47 +01:00
Ruud
68d826ca1c Merge branch 'refs/heads/develop' 2014-02-15 19:48:07 +01:00
Ruud
8dfb0d1d5c Fire events after tab add 2014-02-15 19:47:55 +01:00
Ruud
d6921882e1 Merge branch 'refs/heads/develop' 2014-02-14 19:39:47 +01:00
Ruud
1f982b7999 Migration start 2014-02-14 19:36:23 +01:00
Ruud
eb1556f3e8 Add filter on database manage page 2014-02-12 21:14:06 +01:00
Ruud
061b79eac0 Title 2014-02-12 08:25:17 +01:00
Ruud
3bbeec513a Events 2014-02-12 08:24:15 +01:00
Ruud
a6be59bbea Delete documents 2014-02-12 08:16:10 +01:00
Ruud
96e8a909d8 database manage init 2014-02-11 22:19:55 +01:00
Ruud
8724076601 Remove sqlalchemy and elixir 2014-02-10 23:41:54 +01:00
Ruud
4b356aba3e Nosql 2014-02-10 23:22:11 +01:00
Ruud
a13e0a75e8 Remove test 2014-02-10 23:08:14 +01:00
Ruud
ecf91d616b nosql 2014-02-10 23:07:42 +01:00
Ruud
0d4d0f3126 scandir 2014-02-09 18:29:17 +01:00
Ruud
b9a8ca14c3 nosql 2014-02-09 18:21:47 +01:00
Ruud
f7e1a2a5eb nosql 2014-02-09 15:57:08 +01:00
Ruud
c3bc9c8591 Nosql 2014-02-08 17:58:45 +01:00
Dean Gardiner
3380e20e3a Cleaned up naming of functions in rtorrent downloader 2014-02-08 03:25:11 +13:00
Dean Gardiner
a2c87e1b7d Fixed bug where changes to rtorrent settings wouldn't take effect until a restart 2014-02-08 03:22:59 +13:00
Ruud
a609b401c4 nosql 2014-02-07 15:11:06 +01:00
Ruud Burger
9098e44513 Merge pull request #2823 from ramon86/develop
Category changes for Torrent provider TorrentBytes
2014-02-07 12:26:26 +01:00
Ramon van Dam
62524e01e1 * Added category 'bd50' (BR-Disk) to Torrent provider TorrentBytes
* Changed category identifier for category 'brrip' for Torrent provider TorrentBytes (see issue #2795)
2014-02-07 12:08:17 +01:00
Ruud Burger
78bf1d274e Merge pull request #2817 from fuzeman/feature/dev_rtorrent
[rtorrent] Fixed bug which caused large torrents to fail
2014-02-06 14:04:46 +01:00
Dean Gardiner
461e469f28 Updated rtorrent-python library
- Fixed bencode encoding bug with long types
2014-02-07 01:40:11 +13:00
Ruud
99252074be More nosql 2014-02-02 20:41:14 +01:00
Ruud Burger
e4e7ae3621 Merge pull request #2775 from ressu/fix_rtorrent_connection
Fix rTorrent connectivity
2014-01-31 13:31:32 -08:00
Ruud
63743dd2b6 More NoSQL 2014-01-31 00:38:37 +01:00
Ruud
a254886bad Try NoSQL 2014-01-29 17:49:54 +01:00
Ruud
aab10fb599 Close all 2014-01-28 08:22:06 +01:00
Ruud
9d55ecffe9 Add log var 2014-01-27 21:58:48 +01:00
Ruud
00b613d2e0 Scoped session 2014-01-27 21:56:47 +01:00
Ruud
fe24322f7c Add kwargs to to_dict 2014-01-27 21:56:27 +01:00
mikke89
660e20dada Merge branch 'downloaders_test' into downloaders_test_dev
Conflicts:
	couchpotato/core/downloaders/transmission/main.py
2014-01-26 18:37:18 +01:00
mikke89
18c8e803a4 Fixed 'connection test' for Transmission and Sabnzbd 2014-01-26 18:34:42 +01:00
Sami Haahtinen
15a19949b8 Fix rTorrent connectivity
The combination of cleanHost and rTorrent.connect issues caused rTorrent
connections to fail. This update fixes cleanHost() so that it can
actually cope with SSL based hosts and finishes the migration to
cleanHost() in connect()

Conflicts:
	couchpotato/core/helpers/variable.py
2014-01-26 19:26:15 +02:00
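The gist of the cleanHost() fix is preserving an explicit https:// scheme instead of always forcing http://; a minimal sketch of such a helper (the signature and defaults are illustrative, not the actual code in couchpotato/core/helpers/variable.py):

```python
def clean_host(host, protocol=True, ssl=False):
    # Normalize a user-entered host to "scheme://host[:port]/",
    # keeping an explicit https:// prefix instead of forcing http://.
    host = host.strip().rstrip('/')
    if '://' in host:
        scheme, host = host.split('://', 1)
    else:
        scheme = 'https' if ssl else 'http'
    if not protocol:
        return host + '/'
    return '%s://%s/' % (scheme, host)
```

With this behaviour, an rTorrent host entered as https://seedbox:443 keeps its SSL scheme all the way into connect().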
mikke89
ebc5a66375 Fixed 'connection test' for Transmission and Sabnzbd 2014-01-26 18:17:23 +01:00
Ruud
1120b4ab51 Remove Elixir library
Update SQLAlchemy
2014-01-26 16:29:16 +01:00
Ruud
f91081e39c uTorrent hostname hint 2014-01-26 10:52:50 +01:00
Ruud
9e991e1595 Fix Yify proxy check 2014-01-26 10:40:15 +01:00
Ruud
afac06081c Defer settings dom injection 2014-01-26 10:05:05 +01:00
Ruud
b773228719 Merge branch 'refs/heads/commit_rollback' into develop 2014-01-26 09:18:52 +01:00
Ruud
7001ed476d Wrap all commits with try/except 2014-01-26 00:33:21 +01:00
Ruud
31c39650a9 Force default title when none match 2014-01-25 15:26:35 +01:00
Ruud
fbae706b0f Use correct var to shuffle 2014-01-25 15:26:00 +01:00
Ruud
88c328af8e Improved manage scanning
Expire after db get
2014-01-24 22:33:22 +01:00
Ruud
cbd8981ee2 Use helper 2014-01-24 16:33:10 +01:00
Ruud
3101926e9b removeDuplicate helper 2014-01-24 15:42:29 +01:00
Ruud
c9e0910c55 Can't use len() on filter iterator. fix #2762 2014-01-24 15:29:24 +01:00
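The bug here is a Python 3-style gotcha (likely related to the py3k porting commits nearby): `filter()` returns a lazy iterator, which has no `len()`. A small illustration of the failure and the fix:

```python
releases = ['cam', 'brrip', 'dvdr', 'brrip']

matches = filter(lambda q: q == 'brrip', releases)
# On Python 3, len(matches) raises TypeError: object of type 'filter'
# has no len(). Materialize the iterator into a list first:
matches = list(matches)
count = len(matches)
```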
Ruud
d65667ce16 Don't force add basic auth to url 2014-01-24 14:50:54 +01:00
Ruud
7d7251862c Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-23 22:00:40 +01:00
Ruud
4d02a969c2 Merge branch 'refs/heads/georgewhewell-hdbits-api' into develop 2014-01-23 22:00:17 +01:00
Ruud
e20c776364 Use urlopen for HD Bits requests 2014-01-23 22:00:12 +01:00
Ruud
c55404699e Merge branch 'hdbits-api' of git://github.com/georgewhewell/CouchPotatoServer into georgewhewell-hdbits-api 2014-01-23 21:50:31 +01:00
Ruud Burger
6240e4eba0 Merge pull request #2756 from fuzeman/feature/dev_rtorrent
Increased rTorrent load_torrent max waiting time
2014-01-23 12:48:41 -08:00
Ruud
cf86719607 Encode before logging 2014-01-23 00:08:38 +01:00
Ruud
76943b6529 Make sure imdb list_id regex matches whole string.
Thanks @basrieter
2014-01-23 00:00:59 +01:00
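Anchoring the regex matters because `re.match()` only pins the start of the string; a sketch of the idea (the pattern itself is illustrative, not the exact one used for IMDB list ids):

```python
import re

# Without the trailing '$', a string like 'ls057690396/extra' would
# still match, since re.match() only anchors at the beginning.
LIST_ID = re.compile(r'ls\d+$')

def is_list_id(candidate):
    return bool(LIST_ID.match(candidate))
```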
Ruud
ca8bbdc293 Allow longer imdb user_id parse 2014-01-22 23:59:25 +01:00
Ruud
8e6f12a897 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-22 23:39:13 +01:00
Ruud
52c64c1a6a Get full imdb (watch)list without login. fix #2715 2014-01-22 23:38:18 +01:00
Ruud
ca94d48f8b No need to try and cache htmldata 2014-01-22 23:31:45 +01:00
Dean Gardiner
d860680823 Increased rTorrent load_torrent max waiting time to 10 retries/seconds 2014-01-22 22:27:19 +13:00
Dean Gardiner
d1dbf3745a Updated rtorrent-python library 2014-01-22 22:24:22 +13:00
Ruud Burger
4b1151bda1 Put future import after shebang 2014-01-22 08:48:22 +01:00
Ruud
18c64e493b Don't cache post requests 2014-01-21 23:06:02 +01:00
Ruud
fc6839b441 Force remove duplicate in suggested movies 2014-01-21 22:38:36 +01:00
Ruud
405b63acdd Remove unused CP automation provider 2014-01-21 21:46:03 +01:00
Ruud
f3dee50448 Properly handle and trigger events 2014-01-21 21:29:54 +01:00
Ruud
04e550ebe7 Merge branch 'refs/heads/ressu-fix_log_lines' into develop 2014-01-21 20:29:11 +01:00
Ruud
05b58819d6 Merge branch 'fix_log_lines' of git://github.com/ressu/CouchPotatoServer into ressu-fix_log_lines 2014-01-21 20:27:32 +01:00
georgewhewell
63c72853f4 Change HDBits provider to use API instead of scraping site 2014-01-21 12:07:38 +00:00
mikke89
f20cce0176 Small fix 2014-01-21 01:38:37 +01:00
mikke89
723cbcd8bd Added 'test connection' button for downloaders 2014-01-21 01:30:13 +01:00
mikke89
dfbb84caae Small fix deluge 2014-01-21 01:12:34 +01:00
mikke89
009d6cafaf Added connection test to the rest of downloaders 2014-01-21 00:24:36 +01:00
Ruud
bd9a4289d1 Rename importlib 2014-01-21 00:19:26 +01:00
Ruud
29a34fef8c py3k port helpers 2014-01-20 23:58:54 +01:00
Ruud
08e2a3a883 Import print function 2014-01-20 23:28:48 +01:00
Ruud
2d37022525 Relative import 2014-01-20 23:28:35 +01:00
Ruud
bb3faaf2cd Exception cleanup 2014-01-20 23:27:58 +01:00
Ruud
2c43b9a926 Update six 2014-01-20 23:23:40 +01:00
mikke89
964ed5f497 Added test connection button for uTorrent 2014-01-20 22:09:03 +01:00
Ruud
b47a94852a Update library: tornado 2014-01-20 16:55:58 +01:00
Ruud
f318524070 Update library: html5lib 2014-01-20 16:50:21 +01:00
Ruud
04539edb45 Update library: APScheduler 2014-01-20 16:47:49 +01:00
Ruud
5cf21452c1 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-19 19:18:39 +01:00
Ruud
799299c7cc Code cleanup 2014-01-19 19:15:58 +01:00
Ruud Burger
458330d325 Merge pull request #2743 from fuzeman/feature/dev_rtorrent
Fixed bug in rTorrent downloader when file paths start with '/'
2014-01-19 03:03:31 -08:00
Ruud Burger
973bec9e6a Merge pull request #2741 from fuzeman/develop_iptorrents
Fixed IPTorrents provider searching (again)
2014-01-19 03:01:54 -08:00
Dean Gardiner
1f941a5105 Ensure files returned from rTorrent are absolute and inside the torrent directory. 2014-01-19 23:05:05 +13:00
Sami Haahtinen
8217fecb33 Make Log messages pasteable 2014-01-19 12:01:35 +02:00
Dean Gardiner
1dda7edf1c Fixed bug when parsing torrents page in IPT provider 2014-01-19 22:45:37 +13:00
Ruud
2cfff73486 Merge branch 'refs/heads/develop' 2014-01-18 19:54:32 +01:00
Ruud Burger
3c03e400f0 Merge pull request #2732 from mano3m/develop_fixhost
Store username and pass in cleanhost
2014-01-18 10:52:02 -08:00
mano3m
6388d97c5c Store username and pass in cleanhost
Fixes #2727
2014-01-18 12:39:59 +01:00
Ruud
0c7dda8d44 Merge branch 'refs/heads/develop' 2014-01-17 23:17:41 +01:00
Ruud
161e3086fa Force year as int on tmdb info. fix #2725 2014-01-17 23:00:15 +01:00
Ruud
b3f1f938be Speedup automation getinfo 2014-01-17 22:38:38 +01:00
Ruud
082da6e3a6 Don't return .text in urlopen 2014-01-17 22:38:02 +01:00
Ruud
d9b9447242 Change cachekey if info not extended 2014-01-17 22:37:01 +01:00
Ruud
dbaa377770 version.master 2014-01-17 16:29:29 +01:00
Ruud
47d2b81d1c Merge branch 'refs/heads/develop' 2014-01-17 16:28:59 +01:00
Ruud
d743282578 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-16 11:48:59 +01:00
Ruud
7eee6f0b96 Set proper branch in version file 2014-01-16 11:48:49 +01:00
Ruud Burger
dda3fca4b4 Merge pull request #2718 from techmunk/deluge_improvements
Deluge: Ignore empty torrent results, select only what is needed
2014-01-15 23:54:39 -08:00
Techmunk
8648b2f948 Only request needed properties from deluge, and fix error when CP asks for a torrent hash that is not in deluge (i.e. missing). 2014-01-16 17:13:30 +10:00
Ruud
f52cbd24f8 Remove debug variable 2014-01-15 22:30:00 +01:00
Ruud
5ea13eeffd Catch xbmc turned off error 2014-01-15 21:51:27 +01:00
Ruud
6cc802952f Catch maxretry error
Don't fill logs with duplicate messages
2014-01-15 21:38:00 +01:00
Ruud
190b9db645 Merge branch 'refs/heads/mano3m-develop_cleanhost' into develop 2014-01-15 21:10:14 +01:00
Ruud
81949b9cad Remove prints and actually save deletion 2014-01-15 21:10:06 +01:00
Ruud
894e419f40 Allow config delete 2014-01-15 21:08:19 +01:00
Ruud
cdc6c036aa Merge branch 'develop_cleanhost' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_cleanhost 2014-01-15 19:31:45 +01:00
Ruud Burger
1e9168f682 Merge pull request #2712 from fuzeman/develop_fix_blackhole
Fixed encoding bug with blackhole downloader
2014-01-15 10:29:53 -08:00
Ruud
790415dd4f Log version at start. fix #2708 2014-01-15 14:25:12 +01:00
Ruud
679e0ea2c3 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-15 12:16:38 +01:00
Ruud Burger
bd167403c3 Merge pull request #2711 from fuzeman/develop_iptorrents
Fixed IPTorrents provider searching
2014-01-15 01:14:03 -08:00
Dean Gardiner
13abe62bed Fixed encoding bug that caused the blackhole downloader to fail 2014-01-15 22:01:35 +13:00
Dean Gardiner
4147c5b870 Fixed issue retrieving seeders and leechers which caused searching to fail on IPT 2014-01-15 20:52:27 +13:00
Ruud
37d4755aae Log when there is an actual problem with the filedata download. fix #2705 2014-01-14 15:59:51 +01:00
Ruud
a9f416c4c5 Variable cleanup 2014-01-14 12:06:47 +01:00
Ruud
8a11f246b1 Add group to untag release 2014-01-14 09:31:29 +01:00
Ruud
8d44577dca Update movie info getter with better exception handling 2014-01-13 23:43:10 +01:00
mano3m
72457d8d10 Log with system encoding 2014-01-13 23:15:10 +01:00
mano3m
3bb44f8d9f Migrate rTorrent options 2014-01-13 23:14:18 +01:00
Ruud
279297b8fa Log as debug for file overwrite 2014-01-13 22:30:41 +01:00
Ruud
c71e661daf Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-13 21:54:08 +01:00
Ruud
f8820c06fe Normcase in folder compare 2014-01-13 21:53:38 +01:00
Ruud
907b40e3c6 Higher z-index for userscript popup. fix #2703 2014-01-13 16:27:08 +01:00
Ruud
d318e163bb Custom tag was never defined 2014-01-12 21:28:49 +01:00
Ruud
6e9c36a503 Lowercase compare 2014-01-12 20:31:59 +01:00
Ruud
c9e9fe86aa Don't normcase in sp function 2014-01-12 20:25:45 +01:00
Ruud
c4f4e2b524 Split identifier by known tag if possible 2014-01-12 17:42:10 +01:00
Ruud
95246b90f6 Merge branch 'refs/heads/mano3m-develop_newznab' into develop 2014-01-12 17:10:52 +01:00
Ruud
2fad29df51 Style custom tag input
Add description to abr
2014-01-12 17:10:30 +01:00
Ruud
a95320e162 Merge branch 'develop_newznab' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_newznab 2014-01-12 15:18:41 +01:00
Ruud
31b8805b5e Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-12 15:18:34 +01:00
Ruud
9e69d4e153 Queue multiple media refresh 2014-01-12 15:18:13 +01:00
Ruud Burger
aa5ecd7b42 Merge pull request #2687 from mano3m/develop_log_metadata
Log overwriting of metadata files
2014-01-12 00:21:47 -08:00
Ruud
15f90aa503 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2014-01-12 09:17:40 +01:00
Ruud Burger
ec86bc4a38 Merge pull request #2688 from mano3m/develop_inctran
Add incomplete folder support to Transmission
2014-01-12 00:16:16 -08:00
Ruud Burger
a3efc64901 Merge pull request #2690 from mano3m/develop_renamer
Abort rename when something fails
2014-01-12 00:13:57 -08:00
Ruud Burger
c929ecbac0 Merge pull request #2691 from mano3m/develop_bluray
Fix Bluray.com encoding issue
2014-01-12 00:12:22 -08:00
mano3m
cc32e49060 Fix Bluray.com encoding issue 2014-01-12 01:07:35 +01:00
mano3m
05c41460c2 Downloader cleanHost
Extend the use of clean host (add more checks and features) and make the settings more dummy proof.
2014-01-12 00:52:32 +01:00
mano3m
794efaa209 Abort rename when something fails
And tag the folder with failed_rename so that the release will not be
deleted later on.
2014-01-12 00:41:54 +01:00
mano3m
b0e93ee18c Add custom_tag field to newznab 2014-01-12 00:34:49 +01:00
mano3m
0393b51db6 Add logging 2014-01-11 23:59:16 +01:00
mano3m
464c8ad71c Log overwriting of metadata files
Gives more info for cases like #2641
2014-01-11 23:36:23 +01:00
mano3m
9df0e01874 Add incomplete folder support to Transmission 2014-01-11 23:33:23 +01:00
Ruud
bf2beb2530 Don't fire async event inside an already async event 2014-01-11 20:46:01 +01:00
Ruud
f0b096d41a Don't show empty title on re-add 2014-01-11 20:45:37 +01:00
Ruud
c948f38469 Only add trailer to known quality list. fix #2684 2014-01-11 14:23:35 +01:00
Ruud
516cbd73bd Catch timeout errors when xbmc isn't available 2014-01-11 13:41:41 +01:00
Ruud Burger
680ae53cf4 Merge pull request #2682 from techmunk/deluge_improvements
Only request needed torrent ids from deluge.
2014-01-11 04:06:32 -08:00
Techmunk
99b99a992d Only request needed torrent ids from deluge. 2014-01-11 15:46:24 +10:00
Ruud
fb9d52c2b9 Don't search for movies with a year too far in the future 2014-01-11 00:26:59 +01:00
Ruud
5cc471cc87 Remove path on fail 2014-01-11 00:05:24 +01:00
Ruud
07c7171fbb Image download wasn't working anymore 2014-01-11 00:05:02 +01:00
Ruud
c15dd2dec9 Disable verify for now 2014-01-10 23:17:04 +01:00
Ruud
a408cc0246 Update renamer to not trigger twice
Keep track of status support on releases
2014-01-10 22:54:23 +01:00
Ruud
c2568432e7 Use requests lib for openurl 2014-01-10 14:04:16 +01:00
Ruud
91f3cda995 Update requests lib 2014-01-10 13:16:12 +01:00
Ruud
28aa908513 Add category_id to api docs 2014-01-08 00:08:23 +01:00
Ruud
5e24b11c21 Don't continue with bitsoup if table isn't found. fix #2633 2014-01-06 22:36:51 +01:00
Ruud
4cdf71513f Clean tags from beginning of string. fix #2654 2014-01-06 22:24:34 +01:00
Ruud
7e6d9c02f6 Add quality test name. closes #2664 2014-01-06 21:53:29 +01:00
Ruud
afc4f73e36 Don't try wait when not between time is given 2014-01-05 23:46:42 +01:00
Ruud
5ef0c52277 Create reusable url opener 2014-01-05 22:17:16 +01:00
Ruud
c23b014cff Set default timeout 2014-01-05 22:02:39 +01:00
Ruud
f13cddfb26 Don't return empty actor roles 2014-01-05 18:55:51 +01:00
Ruud
623f6f3ed0 Limit title and actor search for tmdb 2014-01-05 18:07:06 +01:00
Ruud
a158716c8b Move actor images to dict 2014-01-05 17:57:15 +01:00
Ruud
9df7f7b22c Speed up userscript info getter by removing actor info 2014-01-05 13:10:27 +01:00
Ruud
8e5c24282e Disable themoviedb in search 2013-12-31 13:12:34 +01:00
Ruud
266429311b Update Tornado 2013-12-30 23:27:40 +01:00
Ruud
d74342adee Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-12-30 00:34:33 +01:00
Ruud
4408d99524 Typo 2013-12-30 00:33:41 +01:00
Ruud Burger
0168b9cbea Merge pull request #2642 from mano3m/develop_renamer
Fix 100% CPU bugs
2013-12-29 15:06:05 -08:00
mano3m
e69421226b Remove leading '//' from *NIX paths
Fixes #2506,  #2021
2013-12-29 23:42:55 +01:00
mano3m
f08d34b816 Add a trailing separator for windows drive path
Fixes  #2581, #2526
2013-12-29 23:25:53 +01:00
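Both CPU fixes are small path-normalization tweaks; the two cases can be sketched like this (the function name is hypothetical and the separators are hard-coded for illustration):

```python
def normalize_dir(path):
    # Collapse a leading '//' left over from joining *NIX paths
    # (the renamer would otherwise spin on the phantom root).
    while path.startswith('//'):
        path = path[1:]
    # A bare Windows drive like 'C:' resolves to that drive's current
    # directory; a trailing separator makes it mean the drive root.
    if len(path) == 2 and path[1] == ':':
        path += '\\'
    return path
```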
Ruud Burger
4a36c3b6a8 Merge pull request #2631 from mano3m/develop_try_next
Download fixed
2013-12-27 11:18:07 -08:00
mano3m
be0b708d32 Add user-agent to newznab request
Fixes #2611

Note that urllib2.urlopen should just follow redirects, so I don't
understand why we need 3b519aeac9
2013-12-27 20:11:27 +01:00
mano3m
1cea50bcfb Added logging 2013-12-27 19:34:53 +01:00
mano3m
55483cf736 Consider try_next as failed 2013-12-27 19:09:39 +01:00
Joel Kåberg
16f8a1159f Merge pull request #2624 from mano3m/develop_fix
Complete nzbget https
2013-12-23 14:22:50 -08:00
mano3m
d4d03a846e Complete nzbget https
Fixes what went broken :(
2013-12-23 23:08:26 +01:00
Ruud Burger
7bccc46583 Merge pull request #2623 from mano3m/develop_https
Add https functionality for nzbget
2013-12-23 12:35:21 -08:00
mano3m
dc61e9916f Add https functionality for nzbget
Fixes #2622
2013-12-23 15:39:45 +01:00
Joel Kåberg
cf2b5f72ae Revert "Added delete files button, #2596 (manuall merge)"
This reverts commit 0b01bbc52e.
2013-12-21 13:29:02 +01:00
Joel Kåberg
fe397caafc better score formula for seeding/leeching 2013-12-20 02:08:20 +01:00
Joel Kåberg
787405ae62 Updated YIFY provider to use proxies and magnet links, #2560 (manual merge) 2013-12-19 22:14:29 +01:00
Joel Kåberg
0b01bbc52e Added delete files button, #2596 (manual merge) 2013-12-19 22:12:22 +01:00
Joel Kåberg
190e1d2c4f Revert "Merge pull request #2596 from WoLpH/linked_file_delete"
This reverts commit a24d4a9e3b, reversing
changes made to b468048d95.
2013-12-19 22:09:35 +01:00
Joel Kåberg
8a822e35e2 Revert "Merge pull request #2560 from coolius/master"
This reverts commit 64a196f21d, reversing
changes made to a24d4a9e3b.
2013-12-19 22:08:54 +01:00
Joel Kåberg
64a196f21d Merge pull request #2560 from coolius/master
Updated YIFY provider to use proxies and magnet links
2013-12-19 12:58:29 -08:00
Joel Kåberg
a24d4a9e3b Merge pull request #2596 from WoLpH/linked_file_delete
Added delete files button
2013-12-19 12:56:45 -08:00
Joel Kåberg
dafa70b7e3 fix seed/leech score formula, fix #2605 2013-12-19 21:41:17 +01:00
Joel Kåberg
32b9bc3345 Merge pull request #2612 from RuudBurger/manual_scan
Manual scan
2013-12-19 10:30:41 -08:00
Joel Kåberg
a7b8f992d3 Merge pull request #2614 from mano3m/develop_stalled
Don't consider stalled as failed when seeding
2013-12-19 10:30:23 -08:00
Joel Kåberg
0c66b8067e Merge pull request #2607 from mano3m/develop_no_ren
Mark release as downloaded if renamer is disabled.
2013-12-19 10:30:12 -08:00
mano3m
7b3645ea7c Don't consider stalled as failed when seeding
Fixes the issue where Transmission is seeding but still considering the
torrent stalled (new functionality of Transmission). CPS marks it as
failed and a perfectly good torrent gets deleted. Several people on the
forum have this issue.
2013-12-17 21:41:26 +01:00
mano3m
69569758d9 Make sure we return true on success 2013-12-16 22:51:04 +01:00
mano3m
55777531d5 Clean-up and don't mark status twice 2013-12-16 22:43:05 +01:00
Joel Kåberg
99ce8dacbf added api calls for manual scan (kudos to @mano3m) 2013-12-16 17:07:34 +01:00
coolius
138a3b1f3c Replaced default YIFY URL with official alternate domain "yify-torrents.im" 2013-12-16 09:21:59 +00:00
Joel Kåberg
d49c663c64 Merge branches 'develop' and 'manual_scan' of https://github.com/RuudBurger/CouchPotatoServer into manual_scan
Conflicts:
	couchpotato/core/plugins/renamer/main.py
2013-12-16 07:29:31 +01:00
mano3m
e9a457e263 Mark release as downloaded if renamer is disabled.
If the renamer is not enabled and the quality of the downloaded release
is not the finish quality, the release did not get a status update.
2013-12-15 21:03:40 +01:00
Joel Kåberg
3b0e07100f Merge pull request #2545 from mano3m/develop_downloaders
Downloader and renamer improvements
2013-12-14 14:20:33 -08:00
mano3m
74561500b5 Convert windows path to *nix path in sp
Fixes #2594

Note that os.path.normpath converts '/' to '\\' on Windows machines, but
unfortunately not the other way around...
2013-12-14 21:12:10 +01:00
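A minimal sketch of the conversion the commit describes (illustrative, not the actual sp() helper): since os.path.normpath only converts '/' to '\\' on Windows and never the reverse, backslashes have to be replaced by hand before normalizing on *NIX:

```python
import posixpath

def sp(path):
    # Convert Windows separators to *NIX ones, then normalize.
    # posixpath is used so the behaviour is the same on any platform.
    return posixpath.normpath(path.replace('\\', '/'))
```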
Ruud
3b519aeac9 nzbmegasearch returns redirected url. fix #2597 2013-12-12 19:58:55 +01:00
WoLpH
9a55961786 Added delete files button 2013-12-12 02:27:09 +01:00
mano3m
ea5d274f4d Add another check 2013-12-11 22:40:01 +01:00
mano3m
f57f2444fe Improved checking
Fixes #2539?
2013-12-11 22:11:33 +01:00
mano3m
fd768df9e5 Tabs to spaces 2013-12-08 17:35:05 +01:00
Joel Kåberg
4db68e4887 update contributing 2013-12-08 13:07:17 +01:00
mano3m
6d4297a5fb Extend os.path.sep to all folder checks
Expands 50c5044fe8
2013-12-07 22:39:47 +01:00
mano3m
ab413f2f3e Don't remove historic data when doing a full scan.
Fixes #2572

Note that the dashboard already takes care of this and does it the right
way (keeping seeding and ignored releases).
2013-12-07 22:33:34 +01:00
mano3m
574255c4b6 Don't tag .ignore files 2013-12-07 22:33:34 +01:00
mano3m
008ba39856 Add backwards compatibility for the renamer API 2013-12-07 22:33:33 +01:00
mano3m
cff1b3abdb Provide IDs to check to all downloaders 2013-12-07 22:33:32 +01:00
mano3m
231c5b8ca1 Renamer rename to media 2013-12-07 22:33:31 +01:00
mano3m
640664494e Increase check_snatched readability
- Reduce nested if statements
- Add more comments
2013-12-07 22:31:16 +01:00
mano3m
951b7b8425 Update Synology and Pneumatic
As per black hole improvement
2013-12-07 22:31:16 +01:00
mano3m
c9980539f0 Improve black hole support
Also scan the 'from' folder if Black hole is used together with another
downloader.
2013-12-07 22:31:15 +01:00
Ruud Burger
7eb802b42a Merge pull request #2501 from mano3m/develop_xbmc
XBMC metadata update
2013-12-07 13:17:03 -08:00
Ruud Burger
2f4f3ce0fe Merge pull request #2578 from mano3m/develop_fnmatch
Fix fnmatch
2013-12-07 13:14:13 -08:00
mano3m
824ac86d18 Fix fnmatch
fnmatch does not accept regular expressions as presumed in
0c4851e436. See
http://docs.python.org/2/library/fnmatch.html

This patch actually completely broke tagging. All we need to do is make
sure any [ or ] used is converted into [[] or []].

Fixes #2557 and  #2362
2013-12-07 22:11:16 +01:00
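The escaping the commit describes is easy to demonstrate: fnmatch treats '[...]' as a character class, so a filename containing literal brackets never matches its own pattern until each bracket is wrapped in a one-character class (the helper name here is hypothetical):

```python
import fnmatch
import re

def escape_brackets(pattern):
    # '[' -> '[[]' and ']' -> '[]]', i.e. each literal bracket becomes
    # a character class containing only that bracket.
    return re.sub(r'([\[\]])', r'[\1]', pattern)
```

Matching 'Movie [2013].ignore' against itself fails, because '[2013]' matches a single character from {2, 0, 1, 3}; matching against the escaped pattern succeeds.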
mano3m
4553726423 [Notifications][XBMC] Add always do a full scan option to XBMC
Fixes #2498 (at least partially)
2013-12-07 15:09:30 +01:00
mano3m
f0bde7316d [Metadata][XBMC] Update new actors to actor_roles 2013-12-07 15:09:23 +01:00
coolius
4eaddadf8c Removed unusable proxy 2013-12-04 13:50:06 +00:00
coolius
9dd98b29be Added proxy options to YIFY provider 2013-12-03 10:32:10 +00:00
coolius
732946d38a Updated YIFY provider to use proxy list 2013-12-03 10:08:14 +00:00
Ruud
966f8c36b1 Make sure to use a valid cookie_secret. fix #2553 2013-12-02 12:09:14 +01:00
coolius
ca070e67e7 Updated YIFY provider to use proxy and magnet links 2013-12-02 10:53:47 +00:00
Joel Kåberg
b468048d95 directory properly removed 2013-12-01 21:37:07 +01:00
Ruud
50c5044fe8 Add path separator for check 2013-12-01 19:23:53 +01:00
Ruud
46b2d6ba6e movie_id > media_id 2013-11-30 16:48:46 +01:00
Ruud
8aec5cf605 Better (custom) formhints 2013-11-30 14:59:52 +01:00
Ruud
54af80d5ad Don't wait for shutdown of scheduler 2013-11-30 12:51:35 +01:00
Ruud
8b2cd62211 Don't save stash on pull 2013-11-30 12:49:28 +01:00
Ruud
2fc4809821 Variable renaming movie to media 2013-11-30 12:41:06 +01:00
Ruud
bde6de1789 Move movie listing to media 2013-11-30 12:23:53 +01:00
Ruud
029ae20573 Use Object.each for object looping 2013-11-30 11:55:22 +01:00
Ruud
c72cca4ea2 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-11-30 11:52:56 +01:00
Ruud
0f071be762 Use Object.each for object looping 2013-11-30 11:52:41 +01:00
Ruud
fdcddaaffc Merge branch 'refs/heads/develop' 2013-11-30 11:27:41 +01:00
Joel Kåberg
cddf47f113 move long subtitle text into formhint 2013-11-29 22:28:51 +01:00
Joel Kåberg
76f3f5253a move long automation text into formhint 2013-11-29 22:14:28 +01:00
Joel Kåberg
d833a04293 move long texts into formhint 2013-11-29 22:10:02 +01:00
Joel Kåberg
2e96860380 directory properly removed 2013-11-27 07:58:23 +01:00
Ruud Burger
3e2e6385cf Properly split seed ratios and seed times 2013-11-26 17:10:56 +01:00
Joel Kåberg
ccc2028690 remove directory option in utorrent
doesn't behave as expected on windows
2013-11-26 15:51:12 +01:00
Joel Kåberg
81dbc1ca79 Merge pull request #2527 from RuudBurger/couchtart
TorrentPotato ready for prime time
2013-11-25 23:54:45 -08:00
Ruud
e9a3059be2 Allow longer description in formhint 2013-11-25 22:16:02 +01:00
Ruud
3d5b33856f Add some quality tests 2013-11-24 22:45:17 +01:00
Ruud
8d2e3a1919 Add ratio and seed time styling 2013-11-24 21:43:54 +01:00
Joel Kåberg
f3380c4fed seed_time and seed_ratio 2013-11-24 19:33:29 +01:00
Joel Kåberg
8a58d7f973 use hostname instead of TorrentPotato (dashboard) 2013-11-24 14:51:03 +01:00
Ruud
37b98cb835 TorrentPotato styling of inputs 2013-11-24 00:52:51 +01:00
Ruud
50262112b8 Use release_name 2013-11-24 00:27:47 +01:00
Ruud
4b9f9862fc Change name and response 2013-11-23 12:07:00 +01:00
Ruud
df60d70592 Move it 2013-11-23 12:06:46 +01:00
mano3m
1b5bc1fa05 [Metadata][XBMC] Add fileinfo to nfo
Also fixed an int / int = int divide bug
2013-11-23 01:04:41 +01:00
mano3m
e4993eac24 [Metadata][XBMC] Add actors to CPS info and nfo 2013-11-23 01:04:40 +01:00
mano3m
bd1bb1ee91 [Metadata][XBMC] Add images to nfo 2013-11-23 01:04:40 +01:00
mano3m
2c1c57333c [Metadata][XBMC] Add trailer to nfo 2013-11-23 01:04:39 +01:00
mano3m
a466cbcf16 [Metadata][XBMC] Fix nfo data
Fixes #1412 and @Lennong MPAA section
2013-11-23 01:04:38 +01:00
Ruud
379f62a339 CouchTater fixes 2013-11-23 00:31:26 +01:00
Ruud
eaf2974f8d Better frontend notification and GUI updating 2013-11-22 23:00:33 +01:00
Ruud
99e641a30d Update dashboard when the search for a newly added movie ends 2013-11-22 16:47:55 +01:00
Ruud
88d6148500 Update libs 2013-11-22 16:09:15 +01:00
Ruud
f53364eb6c Update Tornado 2013-11-22 16:08:54 +01:00
Ruud
b8f78e311d Update scheduler module 2013-11-22 15:38:33 +01:00
Ruud
bb6e1e2909 Don't propagate core messages to other notification providers. 2013-11-22 15:17:35 +01:00
Ruud
c62c6664ce Merge branch 'refs/heads/fuzeman-feature/notifications/pushbullet' into develop 2013-11-22 01:44:41 +01:00
Ruud
8ae4e3be18 Merge branch 'feature/notifications/pushbullet' of git://github.com/fuzeman/CouchPotatoServer into fuzeman-feature/notifications/pushbullet 2013-11-22 01:44:16 +01:00
Ruud
0065ff5086 Indentation cleanup 2013-11-22 01:34:50 +01:00
Ruud
28d073f934 Merge branch 'refs/heads/Damiya-fix2474' into develop 2013-11-22 01:30:10 +01:00
Ruud
df1cb0ae08 Merge branch 'fix2474' of git://github.com/Damiya/CouchPotatoServer into Damiya-fix2474 2013-11-22 01:29:57 +01:00
jchristi
31a1af43d5 Update fedora init file
This took me a while to figure out when trying to install for the first time. Luckily, I had the sickbeard init file to reference.
2013-11-22 01:28:14 +01:00
Joel Kåberg
8951e9fc90 typo 2013-11-21 22:22:19 +01:00
Joel Kåberg
357166414c use .get() and added more options 2013-11-21 22:20:45 +01:00
Joel Kåberg
e1a311de40 initial couchtarter provider (torrent newznab)
initial groundwork based on the newznab provider
needs UI changes: http://i.imgur.com/4MiJUH5.png (need to add ratio and
seed hours also)

untested code
2013-11-21 19:55:36 +01:00
Kate von Roeder
ab923cc592 Sort directories so that we scan them in alphabetical order as well (keeps things nice and well ordered!) 2013-11-20 18:47:09 -08:00
Kate von Roeder
99947fb135 CSS fix for #1578 part 2 - Change text direction from RTL to LTR, fixing issue where root drives would show up as '\C:'. Weird! 2013-11-20 13:47:40 -08:00
Kate von Roeder
185cb0196a Fix for #1578 - Depends on stableSort, so added to PR#2500.
Object.each is not necessarily alphabetic when iterating an object's properties, so we pull the folders out of the object, add them to an array, and sort that.
2013-11-20 13:36:08 -08:00
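The #1578 fix above is a frontend (MooTools) change, but the underlying idea is language-agnostic: object/dict iteration order is not guaranteed to be alphabetical, so the keys are pulled out into an array and sorted before iterating. A minimal Python sketch of the same idea, with illustrative names not taken from the actual code:

```python
def iter_folders_sorted(folders):
    """Yield (name, info) pairs in case-insensitive alphabetical order,
    instead of relying on the mapping's own iteration order."""
    for name in sorted(folders, key=str.lower):  # copy keys into a sorted list
        yield name, folders[name]

folders = {'b_movies': 2, 'Archive': 3, 'a_movies': 1}
ordered = [name for name, _ in iter_folders_sorted(folders)]
# ordered == ['a_movies', 'Archive', 'b_movies']
```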
Kate von Roeder
309ec50691 Array.sortBy should also use the new stablesort. 2013-11-20 09:15:25 -08:00
Kate von Roeder
f865484182 Add Array.stableSort from mootools forge.
Change calls to Array.sort to use new Array.stableSort. Fixes sorting problems on Chrome
2013-11-20 05:47:36 -08:00
Dean Gardiner
ed19fd0254 Added Pushbullet notifications 2013-11-20 22:04:11 +13:00
Ruud
cec88319fe Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-11-19 23:45:28 +01:00
Ruud
d31b7eb72d Add date and message-id to email notification 2013-11-19 23:45:12 +01:00
Joel Kåberg
b7d93b84dd option to set download directory in utorrent 2013-11-19 18:59:54 +01:00
Joel Kåberg
4008774908 append label unnecessary
just set the full path to the dir
2013-11-19 18:51:28 +01:00
Ruud
b4275639f5 Merge branch 'refs/heads/develop' 2013-11-19 09:17:24 +01:00
Ruud
accce789ba Normalize path sp function 2013-11-19 09:16:47 +01:00
Ruud
091b1fefd2 Add category_id to movie add docs 2013-11-19 09:09:29 +01:00
Ruud
899b1f9b96 Add mobile web capable for Android
Thanks @Elziah
2013-11-18 23:03:40 +01:00
Ruud
0ce5c51c67 renamer.scan needs some files. fix #2481 2013-11-18 22:56:03 +01:00
Ruud
f79fcda27f Small one up 2013-11-17 21:22:24 +01:00
Ruud
cdbcad2238 Merge branch 'refs/heads/develop' into desktop 2013-11-17 21:20:30 +01:00
Ruud
d6709469f6 Merge branch 'refs/heads/develop' 2013-11-17 21:20:13 +01:00
Ruud
da760db340 Merge branch 'refs/heads/mano3m-develop_fixes' into develop 2013-11-17 21:18:37 +01:00
Ruud
4242a5cedb Merge branch 'develop_fixes' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_fixes 2013-11-17 21:16:35 +01:00
mano3m
8c41046836 more fixes 2013-11-17 21:10:07 +01:00
Ruud
5d913e87c3 One up! 2013-11-17 20:20:18 +01:00
Ruud
16f02bda27 Merge branch 'refs/heads/develop' into desktop 2013-11-17 20:03:22 +01:00
Ruud
3e43e3fc4c Merge branch 'refs/heads/develop' 2013-11-17 20:02:20 +01:00
mano3m
c5e6ce0e48 Several sp fixes 2013-11-17 18:26:01 +01:00
Ruud
3ad527eb62 Allow 1080p in webrip quality 2013-11-17 00:24:09 +01:00
Ruud
e622e68701 Merge branch 'refs/heads/develop' 2013-11-17 00:07:01 +01:00
Ruud
af2a6bf031 Force ETA data not to be too far in the future 2013-11-16 23:58:58 +01:00
Ruud
731419b61f Better error logging for syno downloader. close #2464 2013-11-16 23:33:23 +01:00
Ruud
0fafd83d76 Do some scoring with scene / nuked. fix #2009 2013-11-16 23:26:46 +01:00
Ruud
003b78a66e Scene validation 2013-11-16 23:24:05 +01:00
Ruud
4ade857f01 Better string regex between brackets 2013-11-16 23:23:10 +01:00
Ruud
a90a4d1bc2 Merge branch 'refs/heads/develop' 2013-11-16 17:24:09 +01:00
Ruud
658596659f Deluge wrong sp wrap. fix #2463 2013-11-16 17:23:51 +01:00
Ruud
165676407a Merge branch 'refs/heads/develop' 2013-11-16 14:39:58 +01:00
Ruud
59e6d68416 Use correct config name for bithdtv 2013-11-16 14:39:51 +01:00
Ruud
e6d76db250 Merge branch 'refs/heads/mano3m-develop_scan_basefolder' into manual_scan 2013-11-16 14:29:54 +01:00
Ruud
3b3288c53d Manual scan folder cleanup 2013-11-16 14:29:34 +01:00
Ruud
16cf220741 Merge branch 'develop_scan_basefolder' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_scan_basefolder
Conflicts:
	couchpotato/core/plugins/renamer/main.py
2013-11-16 13:45:06 +01:00
Ruud
5131cb0ae1 Merge branch 'refs/heads/develop' 2013-11-16 13:32:37 +01:00
Ruud
db4f7a216a SP function wrapping whole variables 2013-11-16 13:32:00 +01:00
Ruud
3f8b97feb9 Merge branch 'refs/heads/mano3m-develop_clean_path' into develop 2013-11-16 12:57:35 +01:00
Ruud
a27673eaa4 Merge branch 'develop_clean_path' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_clean_path 2013-11-16 12:57:21 +01:00
Ruud
8e3291a1b0 bithdtv, Import correct functions 2013-11-16 12:49:39 +01:00
Ruud
89c04902e8 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-11-16 12:47:57 +01:00
Ruud
e29b100374 Don't try to unicode None object 2013-11-16 12:47:18 +01:00
Shatil Rafiullah
941d4414ce Changed to using getattr() so films lacking sets/collections are also handled. 2013-11-16 12:43:07 +01:00
Shatil Rafiullah
dc830324ae Added XBMC collection (set) categorization capability. 2013-11-16 12:43:02 +01:00
Ruud
3f37fc1e11 Move proxy getter to global torrent provider 2013-11-16 12:39:43 +01:00
Ruud
3442129610 Merge branch 'refs/heads/cptjhmiller-develop' into develop 2013-11-16 12:16:42 +01:00
Ruud
e9d29f10c1 Cleanup KAT import 2013-11-16 12:16:37 +01:00
Joel Kåberg
8996dd34c2 fix indent in bithdtv 2013-11-16 12:01:29 +01:00
Ruud
e2c5be0fcd Merge branch 'develop' of git://github.com/cptjhmiller/CouchPotatoServer into cptjhmiller-develop 2013-11-16 11:57:27 +01:00
Ruud
3d42c55560 Merge branch 'refs/heads/techmunk-develop' into develop 2013-11-16 11:56:35 +01:00
Ruud
9d287f140b Reorder deluge import 2013-11-16 11:56:29 +01:00
Jamie
5a8f28764d Fix to help find working proxy 2013-11-16 02:30:31 +00:00
Joel Kåberg
a2c5074d66 fixed bithdtv provider 2013-11-16 02:58:29 +01:00
Joel Kåberg
6acc125d4f bithdtv provider
thanks to @lansinghd ,
https://github.com/RuudBurger/CouchPotatoServer/pull/2460
2013-11-16 02:57:06 +01:00
Techmunk
7b9ebc2f34 Fixed issue https://github.com/RuudBurger/CouchPotatoServer/issues/2440, by returning a 'True' status when an existing torrent in deluge is added from CP. 2013-11-15 21:25:19 +10:00
Ruud
4e0d6ec980 Merge branch 'refs/heads/clinton-hall-develop' into develop 2013-11-14 22:36:17 +01:00
Ruud
c1944c987d Add some more double char replacements 2013-11-14 22:35:13 +01:00
Ruud
cdb889a985 Merge branch 'develop' of git://github.com/clinton-hall/CouchPotatoServer into clinton-hall-develop 2013-11-14 21:56:31 +01:00
Jamie
f6281c6dcc Update __init__.py 2013-11-14 14:51:27 +00:00
Jamie
c832a9e2b2 Added proxy support 2013-11-14 14:50:50 +00:00
Ruud
0c4851e436 Escape filename before using it in a regex. fixes #2430 2013-11-13 19:32:59 +01:00
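Commit 0c4851e436 escapes the filename before building a regex from it: release names are full of characters like `[`, `]`, `(` and `.` that regex engines treat as metacharacters. A short sketch using Python's `re.escape` (the function name here is illustrative, not from the CPS codebase):

```python
import re

def file_in_listing(filename, listing):
    """Return True if the literal filename occurs in `listing`.
    re.escape() neutralizes regex metacharacters such as '[2013]',
    which would otherwise be parsed as a character class."""
    pattern = re.compile(re.escape(filename), re.IGNORECASE)
    return bool(pattern.search(listing))

file_in_listing('Movie [2013].mkv', '/downloads/Movie [2013].mkv')  # True
```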
Ruud
ce1b205993 Allow 720p tag for screener 2013-11-13 19:22:19 +01:00
Clinton Hall
b771aa303f replace multiple separators. fixes #2448 2013-11-13 21:41:29 +10:30
Ruud Burger
81178b4c8b Merge pull request #2438 from fuzeman/feature/dev_rtorrent
rTorrent: Delete Torrent Directories
2013-11-10 08:28:29 -08:00
Dean Gardiner
0317681597 Added directory removal to the rtorrent downloader 2013-11-11 03:21:00 +13:00
Ruud
ddba0e318f Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-11-09 23:42:15 +01:00
Ruud
3ef9591abd Cleanup import 2013-11-09 23:42:08 +01:00
Joel Kåberg
0d3c0c4077 use delete_files 2013-11-09 22:42:09 +01:00
Ruud
22b32364b6 Missing ) 2013-11-09 21:41:03 +01:00
Ruud Burger
db8fd20d67 Merge pull request #2436 from jkaberg/develop
rTorrent: remove only files, not folder
2013-11-09 09:46:15 -08:00
Joel Kåberg
3c061095e9 remove only files, not folder
(or in the worst case the download client's root folder and anything in it)
2013-11-09 18:33:21 +01:00
Ruud
05853bca89 Don't put plot over trailer z-index 2013-11-06 22:16:02 +01:00
Ruud
aa489bb709 Force name as string 2013-11-05 23:42:44 +01:00
Ruud
0b70465578 Add Flickcharts to userscript 2013-11-05 23:35:47 +01:00
Ruud
5c64ba3c9e Add box office top10 to IMDB automation. closes #2427 2013-11-05 22:45:25 +01:00
Ruud
e119020016 Ignore releases without any info. 2013-11-05 22:16:26 +01:00
Ruud
9b92a3d396 Make sure the ignored files get used. fix #2425 2013-11-05 21:24:47 +01:00
Ruud
c73dc10aeb Add a bit of padding to plot 2013-11-04 22:47:20 +01:00
Ruud
c5ee0a576e Merge branch 'refs/heads/jerbob92-suggestdescription' into develop 2013-11-04 22:26:14 +01:00
Ruud
3e2ede585a Animate plot to show more text 2013-11-04 22:26:08 +01:00
Ruud
ba3dd263ac Merge branch 'suggestdescription' of git://github.com/jerbob92/CouchPotatoServer into jerbob92-suggestdescription
2013-11-03 21:59:39 +01:00
Ruud
7c955ecc80 XMPP notification support
thanks @wernight
2013-11-03 17:17:59 +01:00
Ruud Burger
48193b38c5 Merge pull request #2415 from mano3m/develop_fix_scanner
Cleanup file size code in scanner
2013-11-03 07:44:30 -08:00
Ruud Burger
2f5a233e63 Merge pull request #2416 from mano3m/develop_remote
Default movie_folder to from folder
2013-11-02 11:56:55 -07:00
mano3m
7b86fe5587 Default movie_folder to from folder
In case remote downloaders return a path that does not exist locally,
the movie_folder and files are updated to the from folder. Fixes #2412,
#1762, #1667, #1047
2013-11-02 11:20:34 +01:00
mano3m
5396343940 Cleanup file size code in scanner 2013-11-02 10:43:22 +01:00
mano3m
fa1baa73e8 Introduce path cleaning
A new function sp is introduced. It does the same as ss but also cleans
the path.
2013-11-02 10:15:50 +01:00
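Commit fa1baa73e8 describes `sp` as doing the same as `ss` (which handles string encoding) while also cleaning the path. The actual implementation isn't shown in this log, so the following is a hedged sketch of the path-cleaning half under those assumptions; the encoding part of `ss` is omitted:

```python
import os

def sp(path):
    """Sketch of the 'sp' helper: normalize a path the same way
    everywhere, so the same folder always compares equal regardless
    of stray whitespace, doubled or trailing separators."""
    if not path:
        return path
    return os.path.normpath(os.path.normcase(path.strip()))

sp('/data/movies//Movie/') == sp('/data/movies/Movie')  # True
```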
mano3m
d984f11cbf First attempt at creating a working directory selector 2013-11-02 08:48:52 +01:00
mano3m
ae666bd9b6 Add API call to scan a folder for multiple movies 2013-11-02 08:48:52 +01:00
Ruud
d023eb8f1f Wrong variable logged in email notification 2013-10-30 23:10:02 +01:00
Ruud
9fa62de6dd Wrong variable logged in email notification 2013-10-30 23:09:45 +01:00
Adrien RAFFIN
7c5748ac87 Add support for starttls and allow modification of SMTP server port 2013-10-30 23:06:49 +01:00
Ruud Burger
d6fa5c97db Merge pull request #2387 from restanrm/master
Add support for StartTLS and allow modification of SMTP server port
2013-10-30 15:05:34 -07:00
Ruud
47de84259d Cleanup searcher PR 2013-10-30 22:51:26 +01:00
Ruud
f2b483b16e Merge branch 'refs/heads/fuzeman-dev_searcher' into develop 2013-10-30 22:09:10 +01:00
Ruud
98efe89833 Merge branch 'dev_searcher' of git://github.com/fuzeman/CouchPotatoServer into fuzeman-dev_searcher 2013-10-30 22:08:59 +01:00
Ruud
f8872e2803 Use getter to prevent keyerror. fix #2410 2013-10-30 22:04:51 +01:00
Ruud
3717443e85 Merge branch 'refs/heads/develop' 2013-10-30 21:39:50 +01:00
Ruud
a1fd581bca Add HD quality tags 2013-10-29 21:31:02 +01:00
Ruud
6a4bc1eb08 Don't add tags twice for dvd-r quality 2013-10-29 21:16:32 +01:00
Ruud
94d1f99315 Add ignored group 2013-10-29 21:14:53 +01:00
Ruud
7c51bdbdaf Allow par3 files in binsearch validation 2013-10-27 20:21:02 +01:00
Ruud
d275dfd8cc Add br2dvd as DVD alternative. fix #1604 2013-10-27 20:16:03 +01:00
Ruud
82b879fbb4 Add proper detail url for OMGWTF 2013-10-27 19:50:26 +01:00
Ruud
cc32bd7050 OMGWTF https url 2013-10-27 19:22:58 +01:00
Ruud
4f4ba470e0 Prevent files keyerror for release_download files. fix #2392 2013-10-26 15:26:19 +02:00
Ruud
ce47429701 Only show n/a if undefined 2013-10-26 15:12:54 +02:00
Ruud
550051b3f6 Use order for quality allow calculation. fix #2396 2013-10-26 15:09:30 +02:00
Adrien RAFFIN
a1ba39b3d3 Add support for starttls and allow modification of SMTP server port 2013-10-23 10:35:32 +02:00
Ruud
b4ad7b459f Merge branch 'refs/heads/develop' 2013-10-22 14:17:45 +02:00
Ruud
b149528406 Cleanup older releases calling the wrong function 2013-10-22 14:11:13 +02:00
Ruud
22c257618d Remove unused movie.search function 2013-10-21 00:00:13 +02:00
Ruud
e1c3c334d9 Use new provider named events for search. fix #2379 2013-10-20 23:56:31 +02:00
Ruud
c5e7159952 Don't add identifier score double when scoring 2013-10-20 23:40:16 +02:00
Ruud
fe8946e3b5 Cache qualities.all 2013-10-20 23:29:36 +02:00
Ruud
c354d3c6d5 Guess qualities based on score. fix #2373 2013-10-20 22:47:18 +02:00
Ruud
53cd907db1 Code cleanup 2013-10-20 17:43:30 +02:00
Ruud
605f340be5 Merge branch 'refs/heads/mano3m-develop_torrent_files' into develop 2013-10-20 17:39:36 +02:00
Ruud
e014ce7a47 Merge branch 'develop_torrent_files' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_torrent_files 2013-10-20 17:39:13 +02:00
mano3m
579c1fa53c Fix categories error 2013-10-20 13:24:01 +02:00
mano3m
4bfb5c6397 Make sure Transmission folders are 'normpath'-ed 2013-10-20 02:41:48 +02:00
mano3m
639d635913 Implement better folder checking
Fixes #2360, thanks @clinton_hall
2013-10-20 02:41:48 +02:00
mano3m
37e5f2c48b Fix SabNZBd folder bug
If only one file is extracted, the storage key contains the extracted
file instead of the folder. This leads to CPS skipping the renamer. This
check fixes that.
2013-10-20 01:50:06 +02:00
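The SABnzbd fix above boils down to one check: when the `storage` value points at the single extracted file rather than the job folder, hand the renamer the containing directory instead. A minimal sketch, with an illustrative function name (not the actual CPS code):

```python
import os

def resolve_download_folder(storage):
    """If SABnzbd's 'storage' value is the extracted file itself
    (single-file job), return its containing folder; otherwise the
    value is already the folder the renamer should scan."""
    if os.path.isfile(storage):
        return os.path.dirname(storage)
    return storage
```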
mano3m
583bb1d0d9 Fix debug message 2013-10-19 00:14:04 +02:00
Jeroen Bobbeldijk
d0cffb5863 Fix up tabs 2013-10-18 23:35:20 +02:00
Jeroen Bobbeldijk
548686ebfe Added pilot to suggestion 2013-10-18 23:32:59 +02:00
Ruud
0635c571e4 Remove Notifo 2013-10-18 17:57:44 +02:00
Ruud
5af8fd0b21 Merge branch 'refs/heads/develop' 2013-10-18 17:29:10 +02:00
Ruud
4764925ae6 Only skip data dir paths when updating source 2013-10-18 17:13:06 +02:00
mano3m
80e9831c03 Make uTorrent language independent
Fixes #2341
2013-10-18 00:48:42 +02:00
Dean Gardiner
f7e1fa1406 'release.download' renamed to 'release.manual_download', Moved 'searcher.download' and 'searcher.try_download_result' to 'release.*'. 2013-10-17 23:27:24 +13:00
Dean Gardiner
dc73e5c58f Added back migration code in 'searcher.download' 2013-10-17 22:53:44 +13:00
mano3m
526d383929 Fix for release.update
The done release has no release info. This is fixed by doing it in the
same way as the interface.
2013-10-16 22:17:22 +02:00
mano3m
89f7cfb896 tagging fixes 2013-10-16 22:17:21 +02:00
mano3m
6abc4cc549 Upgrade tagging
Haven't tested this yet, but it should work with both single-file torrents
and folders. Everything mixed, let's go crazy!!
2013-10-16 22:17:21 +02:00
mano3m
6aa7cfc0fe Wrong use of "is" 2013-10-16 22:17:20 +02:00
mano3m
345d0b8211 Add status to renamer.scan api call
This allows for scripts to send the seeding status with the scan
2013-10-16 22:17:20 +02:00
mano3m
eb17afc368 Fixed bug where it didn't do anything... 2013-10-16 22:17:19 +02:00
mano3m
c12b189f5f Fixed variables in scanner 2013-10-16 22:17:19 +02:00
mano3m
5edc745727 Typo 2013-10-16 22:17:18 +02:00
mano3m
bc877df513 Cleanup variable naming
Use release_download variable for all item/status/download_info
variables (which are by now all the same thing)
2013-10-16 22:17:18 +02:00
mano3m
57cb22c9aa Fix type of torrent_files 2013-10-16 22:17:18 +02:00
mano3m
719aca88b7 Clean-up read only files uTorrent 2013-10-16 22:17:17 +02:00
rbfblk
b1e66478f0 Fixing an issue which strips all read bits from utorrent downloaded files on Linux 2013-10-16 22:17:17 +02:00
Dean Gardiner
25f0462c15 Added files for rTorrent 2013-10-16 22:17:16 +02:00
mano3m
caded0694c include files for Transmission 2013-10-16 22:17:16 +02:00
mano3m
39190495be Correct path for one file torrent 2013-10-16 22:17:15 +02:00
Techmunk
1cc998bc95 Include files for renamer in Deluge downloader. 2013-10-16 22:17:15 +02:00
mano3m
54c7aad57a Include files from downloader in renamer 2013-10-16 22:17:14 +02:00
Dean Gardiner
1c8fed5457 Minor cleanup to Searcher and Matcher
Conflicts:

	couchpotato/core/plugins/matcher/main.py
2013-10-16 15:46:46 +13:00
Dean Gardiner
8e51513ee0 Moved 'searcher.create_releases' from Searcher to Release.
Conflicts:

	couchpotato/core/media/_base/searcher/main.py
	couchpotato/core/media/show/searcher/main.py
2013-10-16 15:46:24 +13:00
Dean Gardiner
1788440a5c Cleaned up usage of helper functions
Conflicts:

	couchpotato/core/media/show/searcher/main.py
	couchpotato/core/plugins/matcher/main.py
2013-10-16 15:40:25 +13:00
Dean Gardiner
f467e4d75a Fix to Provider getCatId when returning the cet_backup_id 2013-10-16 15:38:41 +13:00
Dean Gardiner
1e3f8410c0 Added 'searcher.get_media_searcher_id' event, Cleaned up some 'status.get' calls, Renamed some references of 'nzb' to 'rel'.
Conflicts:

	couchpotato/core/media/_base/searcher/main.py
2013-10-16 15:37:52 +13:00
Dean Gardiner
cbb7b96391 'searcher.correct_release' can now return a float indicating the weight/accuracy which is used to scale the score. Fix to IPT _buildUrl method.
Conflicts:

	couchpotato/core/providers/torrent/iptorrents/main.py
2013-10-16 15:34:08 +13:00
Dean Gardiner
5f24338bd2 Renamed 'movie' -> 'media' in 'searcher.download'
Conflicts:

	couchpotato/core/media/_base/searcher/main.py
	couchpotato/core/plugins/release/main.py
2013-10-16 15:32:25 +13:00
Dean Gardiner
56f049cd7d Created 'searcher.try_download_result' event from section in MovieSearcher.single 2013-10-16 15:04:10 +13:00
Ruud Burger
a09e8b63ae Merge pull request #2350 from einartryggvi/develop
Make ubuntu init script executable so it can be symlinked to /etc/init.d
2013-10-14 13:29:53 -07:00
Einar Tryggvi Leifsson
400643cbcd Make ubuntu init script executable so it can be symlinked to /etc/init.d 2013-10-14 20:27:21 +00:00
Ruud
ce68a37441 Zero fill imdb ids found 2013-10-14 22:24:23 +02:00
Ruud
1377b6315c Allow imdb id with int of 4-7 2013-10-14 22:05:32 +02:00
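The two commits above (zero-fill found IMDb ids, accept ids of 4-7 digits) combine into one small normalization step. A hedged sketch with an illustrative helper name; IMDb's canonical form pads the numeric part to at least 7 digits behind a `tt` prefix:

```python
def to_imdb_id(raw):
    """Accept a bare 4-7 digit id, with or without the 'tt' prefix,
    and zero-fill it to the canonical 7-digit 'tt0000000' form."""
    digits = str(raw).lstrip('t')
    if not digits.isdigit() or not 4 <= len(digits) <= 7:
        raise ValueError('not an IMDb id: %r' % raw)
    return 'tt' + digits.zfill(7)

to_imdb_id('97576')  # 'tt0097576'
```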
Ruud
83e7a8d765 Merge branch 'refs/heads/develop' 2013-10-14 21:57:19 +02:00
Ruud
0e18dcb8a1 Use success when adding movies 2013-10-14 21:13:31 +02:00
Ruud
7277ef3bd8 Remove SceneHD as we can't login with captcha. fix #2146 2013-10-14 21:07:37 +02:00
Ruud
4bdd4eab64 Merge branch 'refs/heads/develop' 2013-10-14 00:02:30 +02:00
Ruud
5bf3b929a2 Detect Windows 8 tablets as touchdevice also. 2013-10-14 00:01:38 +02:00
Ruud
66967f8326 Whatever! #2283
@clinton ;)
2013-10-13 22:37:15 +02:00
Ruud
e9abf982fe Flixter decode json before parsing. closes #2305 2013-10-13 22:21:32 +02:00
Ruud
3535f44db9 No need to use disable check in automation 2013-10-13 22:12:27 +02:00
Ruud
c772758683 Add category to renamer replacements. fix #2283 2013-10-13 22:12:15 +02:00
Ruud
2fc097c0e8 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-10-13 21:47:21 +02:00
Ruud
c9d7418899 Force unicode name for newznab. fix #2347 2013-10-13 21:46:16 +02:00
Ruud Burger
1317a4c6b7 Merge pull request #2346 from mano3m/develop_fix_dashboard
Move and fix cleanreleases
2013-10-13 12:17:49 -07:00
mano3m
4b0a5bdd9b Move and fix cleanreleases 2013-10-13 16:53:45 +02:00
Ruud
2b57bdcd03 Revert "Make sure to untag downloading dir if it's completed. fix #2341"
This reverts commit 65f039e9ed.
2013-10-13 15:17:39 +02:00
Ruud
65f039e9ed Make sure to untag downloading dir if it's completed. fix #2341 2013-10-13 14:25:50 +02:00
Ruud
3be6389fbf Use json in flixter 2013-10-13 14:16:59 +02:00
Ruud
9bf01e3a0b Plex endless loop when no clients connected 2013-10-13 14:01:18 +02:00
Ruud
1305327564 Merge branch 'refs/heads/fuzeman-feature/dev_plex' into develop 2013-10-13 13:56:38 +02:00
Ruud
97b6cf013f Merge branch 'feature/dev_plex' of git://github.com/fuzeman/CouchPotatoServer into fuzeman-feature/dev_plex 2013-10-13 13:56:26 +02:00
Ruud
e1a6b813a5 Merge branch 'refs/heads/mano3m-develop_fix_dashboard' into develop 2013-10-13 13:45:45 +02:00
Ruud
b0e30921ae Merge branch 'develop_fix_dashboard' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_fix_dashboard 2013-10-13 13:45:27 +02:00
Ruud
f4c4f013da Cleanup searcher and release checking 2013-10-13 13:44:26 +02:00
Ruud
43ef982d95 Merge branch 'refs/heads/fuzeman-dev_searcher' into develop 2013-10-13 12:57:43 +02:00
Ruud
d930bc4afd Merge branch 'dev_searcher' of git://github.com/fuzeman/CouchPotatoServer into fuzeman-dev_searcher 2013-10-13 12:57:22 +02:00
Kevin Carter
6dbdd4c0be Load lsb init-functions so that status_of_proc is available 2013-10-13 12:50:28 +02:00
Ruud
93bd75acc8 Make iframe https 2013-10-12 23:12:45 +02:00
Dean Gardiner
bdeace8a68 New clients added that aren't in the current client cache now trigger a reload if the list isn't "stale" yet. 2013-10-13 03:00:52 +13:00
Dean Gardiner
efdf70acb2 When notifications fail to send the client list is automatically reloaded in case the client address has changed. 2013-10-13 02:52:55 +13:00
Dean Gardiner
d31ca2677e Cleaned up Plex notifications plugin. 2013-10-13 02:26:35 +13:00
mano3m
3a117b6077 Make sure movies are removed from dashboard 2013-10-12 13:42:48 +02:00
mano3m
6d2889f88d Fix releases missing from Snatched&Available
Fixes #1958
2013-10-12 13:42:30 +02:00
Ruud Burger
213b03589a Merge pull request #2339 from cicavey/develop
Changed MIME type of JSONP requests to text/javascript
2013-10-12 04:26:08 -07:00
cicavey
79fd5fe332 Changed MIME type of JSONP requests to text/javascript 2013-10-12 07:11:37 -04:00
Ruud Burger
25a5b72d26 Merge pull request #2331 from fuzeman/feature/dev_rtorrent
rTorrent Downloader - fixes to scgi on Python 2.6
2013-10-12 03:07:31 -07:00
Dean Gardiner
8970e7fbba Fix to Searcher.createReleases (media_id doesn't exist yet) 2013-10-12 15:24:06 +13:00
Dean Gardiner
e96724beaf Fix to MovieSearcher.single to set default media type as types aren't in develop yet. 2013-10-12 15:11:46 +13:00
Dean Gardiner
73d7d01ae4 Fixed ResultList.append call to 'movie.searcher.correct_movie' instead of 'searcher.correct_release' 2013-10-12 15:10:26 +13:00
Dean Gardiner
34c69786de Merge base/movie searcher changes from branch 'tv' into develop 2013-10-12 14:25:00 +13:00
Dean Gardiner
8587b9b780 Updated rTorrent library - MethodError exceptions when calling group methods should be fixed. 2013-10-11 13:33:20 +13:00
Dean Gardiner
b9f88f431b Updated rTorrent library and fixed call to MethodError.message (should be MethodError.msg) in _update_provider_group 2013-10-11 04:12:36 +13:00
Dean Gardiner
df90ee0a55 Updated rtorrent library - scgi fix for Python 2.6 2013-10-10 15:58:35 +13:00
Ruud Burger
32a4075979 Merge pull request #2326 from fuzeman/feature/dev_rtorrent
rTorrent Downloader - scgi support
2013-10-09 08:08:04 -07:00
Ruud
99606e22d6 Make YIFY a imdbid search. fix #2323 2013-10-09 16:45:45 +02:00
Ruud
5fd0253089 Import Media, not Movie. fix #2320 2013-10-09 16:37:16 +02:00
Ruud
a46241bb9f Better year name guessing. #2323 2013-10-09 16:36:13 +02:00
Dean Gardiner
a8087c8ce9 Updated rTorrent downloader options 2013-10-09 23:07:14 +13:00
Dean Gardiner
0a90ad5db7 Updated rtorrent library to current master - scgi:// support 2013-10-09 22:24:22 +13:00
Ruud
75bda46f64 Userscript styling fixes 2013-10-08 21:53:03 +02:00
Ruud
a0d2a64e57 Userscript didn't load properly 2013-10-08 21:51:34 +02:00
Ruud
d1c3f0c241 Use Media for all Movie db actions 2013-10-08 09:57:36 +02:00
Ruud
107606ce65 Add tv branch column aliases 2013-10-08 09:57:17 +02:00
Ruud
32646d0608 Use movie instead of media model 2013-10-08 09:22:05 +02:00
Ruud
eabd2b6c41 Rename mediaplugin 2013-10-08 09:21:53 +02:00
Ruud
b8ac093182 Remove refresh from movie media
Conflicts:
	couchpotato/core/media/movie/_base/main.py
2013-10-08 09:15:41 +02:00
Ruud
bac3055726 Move media refresh to media plugin 2013-10-08 08:46:32 +02:00
Ruud
5e683b5a48 Revert "TorrentBytes login url change. fix #2317"
This reverts commit 95d0dacd28.
2013-10-07 23:43:08 +02:00
Ruud
955814397a Revert "TorrentBytes login url change. fix #2317"
This reverts commit 95d0dacd28.
2013-10-07 23:38:53 +02:00
Ruud
10fe175ff5 Move suggestions to movie folder 2013-10-07 22:52:05 +02:00
Ruud
bca4a2e241 Move search item to movie folder 2013-10-07 22:51:23 +02:00
Ruud
3925d4c215 Make search work for multiple media types 2013-10-07 21:23:09 +02:00
Ruud
8ca5c62575 YIFY use IMDB id for search. fix #2313 2013-10-07 15:52:25 +02:00
Ruud
f178825d21 Merge branch 'refs/heads/develop' 2013-10-07 09:20:57 +02:00
Ruud
95d0dacd28 TorrentBytes login url change. fix #2317 2013-10-07 09:20:01 +02:00
Ruud
b6f850dc27 in_ needs list.. 2013-10-03 08:30:13 +02:00
Ruud
38ce63795c Check snatched with single query 2013-10-03 08:26:02 +02:00
Ruud
bbf42da875 ILoveTorrents cleanup 2013-09-30 22:18:36 +02:00
salfab
8df0ecc223 disabled by default 2013-09-30 21:55:33 +02:00
salfab
c37bf12c8a improve resilience when retrieving the description in get_more_info 2013-09-30 21:55:29 +02:00
salfab
83051b2576 support getting more info. 2013-09-30 21:55:24 +02:00
salfab
75360f734c use a proper name, instead of the link 2013-09-30 21:55:20 +02:00
salfab
87754047fa torrents are found and appended to the results argument 2013-09-30 21:55:16 +02:00
salfab
f121db059e add new provider for ILT. 2013-09-30 21:55:08 +02:00
Ruud
c9e693287c Merge branch 'refs/heads/mano3m-develop_release' into develop 2013-09-30 20:52:51 +02:00
Ruud
0876d1ff8e Rename release.update to update_status 2013-09-30 20:52:04 +02:00
Ruud
6bbcc5af77 Merge branch 'develop_release' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_release 2013-09-30 20:31:50 +02:00
Ruud Burger
6a9f6a6fc8 Merge pull request #2099 from mano3m/develop_folder
Remove all empty folders after rename
2013-09-30 11:24:21 -07:00
Ruud Burger
1da3546f2d Merge pull request #2270 from fuzeman/feature/dev_rtorrent
rTorrent Downloader fixes
2013-09-30 11:23:00 -07:00
Ruud Burger
d233425a77 Merge pull request #2272 from fuzeman/feature/dev_plex
Fixed Plex notifications on latest PHT
2013-09-30 11:22:26 -07:00
Ruud
8883d505ba Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-09-30 20:12:31 +02:00
Ruud Burger
c51d806840 Merge pull request #2282 from mano3m/develop_encryptedasfailed
Consider encrypted as failed fix #2260
2013-09-30 11:12:08 -07:00
Ruud
13a0c4607d Merge branch 'refs/heads/jkaberg-develop' into develop 2013-09-30 20:11:51 +02:00
mano3m
fd8e50b533 [SabNZBd] Consider encrypted as failed 2013-09-30 20:05:34 +02:00
Ruud Burger
682216dcf4 Merge pull request #2281 from mano3m/develop_seedfix
Fix seeding status check #2278
2013-09-30 10:44:43 -07:00
mano3m
6bda5f5b03 Don't use movie done status to check seeding
Fixes #2278
2013-09-30 19:34:12 +02:00
mano3m
6174f121c8 fix log message 2013-09-30 19:27:11 +02:00
mano3m
89daa836e7 Remove all empty folders
Quite often there is a subfolder in the movie folder after extraction.
This folder is deleted but the actual movie folder remains behind. This
update fixes that in both cases: move_folder is known, or we work in the
'from' folder.
2013-09-30 19:24:46 +02:00
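The empty-folder cleanup described above (a subfolder is deleted after extraction but the movie folder itself is left behind) is a bottom-up walk that removes any directory found empty, including the starting folder. An illustrative sketch, not the actual renamer code:

```python
import os

def remove_empty_folders(root):
    """Walk bottom-up and remove every directory that is empty after
    renaming. topdown=False visits children before their parents, so
    a parent emptied by the loop is removed on its own turn."""
    for folder, _dirs, _files in os.walk(root, topdown=False):
        if not os.listdir(folder):  # re-check: children may just have gone
            os.rmdir(folder)
```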
mano3m
7c5616cc79 fix colour order 2013-09-30 19:16:19 +02:00
mano3m
27fdbff619 Set missing to ignored after 1 week 2013-09-30 19:16:13 +02:00
mano3m
516447a104 Remove movie_dict 2013-09-30 19:16:08 +02:00
mano3m
0c6c172d6a Update movie quality status colour and text
It isn't perfect this way. I think we need to add a separate function to
do this and call it both when CPS loads the page and when it
updates a release (e.g. just rebuild the icons)
2013-09-30 19:16:01 +02:00
mano3m
d11f9d26c0 Add missing status 2013-09-30 19:15:51 +02:00
mano3m
a2cb0ec8ad frontend release.update 2013-09-30 19:15:44 +02:00
mano3m
1bddadf3a4 clean-up searcher 2013-09-30 19:15:30 +02:00
mano3m
f0f843f746 Add release.update event
Proof of concept commit.

It updates the database and calls movie.update.id to refresh the entire movie in the frontend. It would be better to create a static js file in the release folder and add release functionality there, including updating one release only.
2013-09-30 19:15:10 +02:00
Joel Kåberg
317a1f119b not needed 2013-09-29 18:03:52 +02:00
Joel Kåberg
b128ef17c9 Added directory option
and an option to append label to directory path
2013-09-29 15:32:23 +02:00
Ruud
cc4350b0f9 NZBGet missing in wizard. fix #2262 2013-09-29 14:05:28 +02:00
Ruud
fe2290fccb Merge branch 'refs/heads/develop' 2013-09-29 14:00:20 +02:00
Dean Gardiner
0b00f2d9e6 Fixed Plex notifications on latest PHT (protocol renamed to 'plex') 2013-09-30 00:49:00 +13:00
Ruud
e7aa91b3e1 Don't try to use custom_plugins when folder doesn't exist 2013-09-29 13:44:52 +02:00
Ruud
333abd2486 Custom plugin folder outside source. fix #2076 2013-09-29 13:25:10 +02:00
Dean Gardiner
226835e3d0 Added a check to ensure a torrent has been loaded (and found). 2013-09-29 23:32:03 +13:00
Dean Gardiner
48db4c8b8e Updated rtorrent-python library 2013-09-29 23:21:53 +13:00
Ruud
ae4e15286a Don't try to loop over None. fix #2268 2013-09-29 12:17:09 +02:00
Ruud
1b96489656 Merge branch 'refs/heads/jkaberg-develop' into develop 2013-09-29 10:06:22 +02:00
Ruud
99c899ea3a Proper variable naming 2013-09-29 10:06:12 +02:00
Ruud
8f76dd7a2e Merge branch 'develop' of git://github.com/jkaberg/CouchPotatoServer into jkaberg-develop 2013-09-29 09:57:17 +02:00
Ruud
1f2c2269e6 Ignore thumbs.db files and don't fail on single path split. fix #2265 2013-09-29 09:54:37 +02:00
Joel Kåberg
201185f7e7 better english damnit! 2013-09-29 01:49:51 +02:00
Joel Kåberg
e38d68c019 actual code 2013-09-29 01:45:50 +02:00
Joel Kåberg
91332e06e5 add option to create sub directory 2013-09-29 01:45:24 +02:00
Ruud
96b4af1fea Hide first item in combined table 2013-09-29 00:08:26 +02:00
Ruud
e4d67645b7 Merge branch 'refs/heads/develop' 2013-09-28 23:43:49 +02:00
Ruud
b4bccc9be2 Flixter automation support
Thanks @mikedm139
2013-09-28 23:41:15 +02:00
Ruud
d6ddee236a Merge branch 'refs/heads/mano3m-develop_bluray' into develop 2013-09-28 23:17:42 +02:00
Ruud
364e355114 Also try to load the root module for each path 2013-09-28 21:25:25 +02:00
Ruud
7d4f9d60b1 Code formatting 2013-09-28 19:17:41 +02:00
Ruud
116bc839fc Make description more clear 2013-09-28 19:12:05 +02:00
Ruud
153d4b2b1d Merge branch 'develop_bluray' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_bluray 2013-09-28 18:20:47 +02:00
Ruud
2f4f140662 Don't overwrite data variable in utorrent download. fix #2222 2013-09-28 18:19:17 +02:00
Ruud
475ac1bb9c Only use filename for identification when possible. fix #2233 & #954 2013-09-28 18:06:45 +02:00
Ruud
49015b7d64 Be sure to ss quality alt in guess 2013-09-28 17:45:32 +02:00
Ruud
99efcce4d0 Merge branch 'refs/heads/techmunk-2235' into develop 2013-09-28 17:04:00 +02:00
Ruud
c3c971db23 Merge branch '2235' of git://github.com/techmunk/CouchPotatoServer into techmunk-2235 2013-09-28 17:03:27 +02:00
Ruud
8011634b7a Use correct encoding for emails. fix #2254 2013-09-28 16:39:31 +02:00
Ruud
ededfcb822 Escape spaces for each request. fix #2256 2013-09-28 16:28:46 +02:00
Ruud
92a0af5ce3 Use label for quality guess also. closes #2237 2013-09-28 15:23:45 +02:00
Ruud
ffaffbc66f Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-09-28 14:31:47 +02:00
Ruud
2596bbe2bc Merge branch 'refs/heads/saxicek-tsh_scene_only' into develop 2013-09-28 14:30:27 +02:00
Ruud
3310bdf551 Don't use quotes for torrentshack 2013-09-28 14:30:20 +02:00
Ruud Burger
19d357b866 Merge pull request #2261 from mano3m/develop_transmission
[Transmission] Fix  #2168
2013-09-28 04:49:31 -07:00
mano3m
871aecb689 Fix transmission #2168 2013-09-28 13:35:26 +02:00
mano3m
00bb055474 set backlog to False after backlog search 2013-09-28 12:36:43 +02:00
mano3m
f10d182468 Added Blu-ray.com backlog automation
I missed a few movies, so I added backlog functionality to Blu-ray.com

If you want to add all Blu-rays that ever came out to the wanted list,
you can use this. Be careful with what you wish for :D
2013-09-28 12:36:43 +02:00
Techmunk
74a4e7d19d Indenting on deluge auth fix was incorrect. 2013-09-27 14:59:03 +10:00
sax
c7c64c6002 Changed implementation of "scene_only" parameter to use filter criteria instead of parsing the information from query result. 2013-09-25 14:05:16 +02:00
Techmunk
8474d0d95d Fix the way the client auth file is found and processed to match the defaults in the deluge clients. 2013-09-25 21:44:05 +10:00
Ruud
4a5c878c36 Wrong config name for plex host 2013-09-24 22:44:14 +02:00
Ruud
2b0a70355a Merge branch 'refs/heads/fuzeman-feature/dev_plex' into develop 2013-09-24 22:37:46 +02:00
Ruud
9b5166826f Cleanup Plex notification 2013-09-24 22:37:40 +02:00
Ruud
3b1efb2c30 Merge branch 'feature/dev_plex' of git://github.com/fuzeman/CouchPotatoServer into fuzeman-feature/dev_plex
Conflicts:
	couchpotato/core/notifications/plex/main.py
2013-09-24 21:35:36 +02:00
Ruud
8d108b92bf One Up 2013-09-23 21:48:12 +02:00
Ruud
46783028b1 Merge branch 'refs/heads/develop' into desktop 2013-09-23 21:36:45 +02:00
Ruud
324415be15 Merge branch 'refs/heads/develop' 2013-09-23 21:35:51 +02:00
Ruud
b5d2a41d60 Enable NewzNab by default 2013-09-23 21:35:40 +02:00
Ruud
cc3aad49ed Remove FTDWorld 2013-09-23 21:35:29 +02:00
Ruud
7c44f9ab13 Merge branch 'refs/heads/develop' 2013-09-23 21:30:19 +02:00
Ruud
2365e1859f Don't show suggestions if there aren't any. fix #2153 2013-09-22 10:47:13 +02:00
Ruud
03700e0a04 Userscript image didn't show 2013-09-22 00:43:50 +02:00
Ruud
1ff4901846 Make sure to remove listener, even after fail 2013-09-21 22:29:15 +02:00
Ruud
d70a71a12e Make nonblock debug message 2013-09-21 22:17:01 +02:00
Ruud
866d9621cb Create new listener list 2013-09-21 22:16:44 +02:00
Ruud
2d3fc03a00 Revert back to UTF8 when ss encoding fails. fix #2220 2013-09-21 13:56:17 +02:00
Ruud
19f782e4a5 Don't try to change elements that don't exist. fix #2219 2013-09-21 12:41:06 +02:00
Ruud
fdd851d29a Binsearch age parse failed for releases newer than 1 day. fix #2217 2013-09-21 12:14:40 +02:00
Ruud
6cd38a3469 Providers missing in wizard 2013-09-21 11:20:53 +02:00
Ruud
bfa3b87188 Only show soon and late with no releases 2013-09-21 11:07:16 +02:00
Ruud
628fda2097 Merge branch 'refs/heads/develop' 2013-09-20 18:15:28 +02:00
Ruud
69a9fa1193 Simplify string before checking on imdb 2013-09-20 18:08:27 +02:00
Ruud
9e0805ec89 Hide IE clear button on search 2013-09-20 18:08:12 +02:00
Ruud
d08c7c57a8 One up! 2013-09-20 17:46:54 +02:00
Ruud
eeeb845ef3 Simplify string before checking on imdb 2013-09-20 17:30:11 +02:00
Ruud
651a063f94 Fix about submenu 2013-09-20 16:33:01 +02:00
Ruud
f20aaa2d9d Hide IE clear button on search 2013-09-20 16:23:42 +02:00
Ruud
ba925ec191 Merge branch 'refs/heads/develop' into desktop
Conflicts:
	couchpotato/core/plugins/suggestion/main.py
2013-09-20 16:12:40 +02:00
Ruud
f67c6fe8be Only remove images from cache folder on cleanup 2013-09-20 16:07:18 +02:00
Ruud
8d38fa87a4 Copy unrar dll to cache folder. fix #2205 2013-09-20 16:06:23 +02:00
Ruud
7c79c6d1f3 Update TorrentShack url. fix #2209 2013-09-20 12:51:58 +02:00
Ruud
b0781b45f8 Different separator for folder and filename 2013-09-19 23:49:23 +02:00
Ruud Burger
ee53539906 Merge pull request #2163 from mano3m/develop_utorrent
Fix folder issue uTorrent
2013-09-19 14:40:16 -07:00
Ruud
c8ab6a06fb ASCII encode md5 string. closes #2167 2013-09-19 23:39:15 +02:00
Ruud
c75ac51eb7 Try the info dict to get title. fix #2206 2013-09-19 23:29:21 +02:00
Ruud
33d7d994d4 Don't try to finish an already closed connection 2013-09-19 23:16:49 +02:00
Ruud
da6d749072 Merge branch 'refs/heads/develop' 2013-09-19 22:11:21 +02:00
Ruud
96291f63da Create db backup dir before trying to use it. fix #2207 2013-09-19 22:11:10 +02:00
Ruud
bef2b28acc Merge branch 'refs/heads/develop' 2013-09-18 23:06:29 +02:00
Ruud
6464bb065d Better year guessing. fix #609 2013-09-18 23:04:54 +02:00
Ruud
8b45b6f1a0 Only backup database max once an hour. fix #1218 2013-09-18 22:07:07 +02:00
Ruud
302f571837 Merge branch 'refs/heads/develop' 2013-09-18 21:45:03 +02:00
Ruud
70ba5d80cd Trailers not downloading. fix #1563 2013-09-18 21:42:25 +02:00
Ruud
ac30152930 Don't start new long-poll right away. 2013-09-17 21:45:43 +02:00
Ruud
ad01a3da4d Update GuessIt 2013-09-17 21:04:15 +02:00
Ruud
5f5f17112a Don't try to search SceneAccess for BR-Disk. fix #2188 2013-09-17 20:48:01 +02:00
Ruud
156da670e8 Encode before checking imdb content. fix #2186 2013-09-17 20:43:41 +02:00
Ruud
821c26f35b Return default cached suggestion list. fix #2191 2013-09-17 20:39:20 +02:00
Ruud
a092f394fa Snatch next didn't pick correct element 2013-09-17 20:18:41 +02:00
Ruud
18e3194e27 Better category defaults 2013-09-16 22:37:10 +02:00
Ruud
08a1e1e582 Don't use faulty None value for category 2013-09-16 22:33:45 +02:00
Ruud
074005ed02 Use existing category on re-add. fix #2182 2013-09-16 22:33:26 +02:00
Ruud Burger
7660a3d78f Merge pull request #2180 from techmunk/2107
Deluge SSL negotiation errors on Windows machines.
2013-09-16 13:09:49 -07:00
Techmunk
9211e60804 Use the actual SSLv3 constant in deluge transfer.py. 2013-09-17 00:06:35 +10:00
Techmunk
87f295be28 Fix Deluge SSL negotiation errors on Windows machines. 2013-09-16 23:12:46 +10:00
mano3m
cfa89c8921 [uTorrent] Guarantee a folder
uTorrent does not create a folder in case only one file is present in
the torrent. This is a workaround that detects torrents with one file.
It then removes the torrent and re-adds it with a specified subfolder.
2013-09-15 10:01:59 +02:00
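The detection step behind the workaround above can be sketched as follows. This is an illustrative sketch, not CouchPotato's actual downloader code — `needs_subfolder` and the dict layout are hypothetical — but the `files`/`length` distinction is how the BitTorrent metainfo format separates multi-file from single-file torrents:

```python
def needs_subfolder(torrent_info):
    # In bencoded torrent metadata, multi-file torrents carry a 'files'
    # list under 'info', while single-file torrents have only a 'length'.
    # uTorrent creates a folder only in the multi-file case, so
    # single-file torrents need a subfolder forced on them.
    return 'files' not in torrent_info.get('info', {})

# Minimal stand-ins for decoded torrent metadata:
single_file = {'info': {'name': 'Movie.2013.720p.mkv', 'length': 1234}}
multi_file = {'info': {'name': 'Movie.2013.720p',
                       'files': [{'path': ['movie.mkv'], 'length': 1234}]}}
```

A torrent flagged this way would then be removed and re-added with an explicit subfolder, as the commit describes.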
Ruud
70f834d925 Gilles de la Tourette 2013-09-15 00:46:39 +02:00
Ruud
41dde209d5 Merge branch 'refs/heads/develop' 2013-09-14 11:41:45 +02:00
Ruud
6b4e4fd440 Only show login when both username and password are filled in. fix #2157 2013-09-14 11:41:16 +02:00
Ruud
b83b2453a0 not in 2013-09-12 22:50:08 +02:00
Ruud
82d31d996d Set order changes on each run. fix #2148 2013-09-12 22:29:59 +02:00
Ruud
4faa617039 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-09-12 11:08:14 +02:00
Ruud
a1d2276668 Match variable name in ubuntu init. fix #2149 2013-09-12 11:07:49 +02:00
Ruud
19c50f728e Suggestions, mark as seen. 2013-09-11 22:41:38 +02:00
Ruud
a94307c59f rTorrent import cleanup 2013-09-11 21:33:11 +02:00
Ruud
c6403e87f1 Get releases when cleaning up managed movies 2013-09-11 12:24:50 +02:00
Ruud
5d350ef5ac Merge branch 'refs/heads/develop' 2013-09-11 09:29:05 +02:00
Ruud
b56cd3439e added_identifiers needs to be mutable. fix #2140 #2141 2013-09-11 09:28:30 +02:00
Ruud
4fd1d986dd Merge branch 'refs/heads/develop'
Conflicts:
	couchpotato/static/style/main.css
2013-09-11 09:11:04 +02:00
Ruud
25693d44eb Count NONE as success for NZBGet. fix #2135 2013-09-11 09:07:32 +02:00
Ruud
43af25a30e Fix menu phone styling 2013-09-10 23:50:17 +02:00
Ruud
023278e0c0 Remove webkit button styling 2013-09-10 23:32:51 +02:00
Ruud
55d57bc07b Give minified own FileHandler 2013-09-10 23:25:05 +02:00
Ruud
a81a262fb6 Change static path 2013-09-10 23:25:05 +02:00
Ruud
c37360f848 Login styling 2013-09-10 23:25:05 +02:00
Ruud
d7700900db Login base 2013-09-10 23:25:05 +02:00
Ruud
0634c79f74 Give minified own FileHandler 2013-09-10 23:21:31 +02:00
Ruud
31b3c2ef64 Change static path 2013-09-10 22:59:31 +02:00
Ruud
4a71f2c556 Login styling 2013-09-10 22:58:41 +02:00
Ruud
9783409756 Login base 2013-09-10 18:02:04 +02:00
Ruud
faa136a365 Merge branch 'refs/heads/develop' 2013-09-10 09:39:18 +02:00
Ruud Burger
c7e85c00ca Merge pull request #2133 from mythin/fix-variable-change
Fix the variable passed to the getImdb method
2013-09-09 23:32:14 -07:00
Mythin
94647bbb57 Fix the variable passed to the getImdb method 2013-09-09 23:08:49 -07:00
Ruud
bd73b94ea4 Merge branch 'refs/heads/develop' 2013-09-09 22:29:00 +02:00
Ruud
1aa26a5a6c Replace protocol if it doesn't exist 2013-09-09 22:28:21 +02:00
Ruud
d764d0f096 Merge branch 'refs/heads/develop' 2013-09-08 22:17:03 +02:00
Ruud
df13a0edc2 Ignore modules with only .pyc files in them. 2013-09-08 22:12:08 +02:00
Ruud
52a0de3b59 Deleting from late block didn't work 2013-09-06 23:12:22 +02:00
Ruud
38886b28f7 Hide soon and late blocks on dashboard if they're empty. fix #1778 2013-09-06 23:05:41 +02:00
Ruud
226cf6fc38 Make sure to not query db when there aren't any ids 2013-09-06 22:45:37 +02:00
Ruud
203a52bfd1 Don't load updater.js twice 2013-09-06 20:17:21 +02:00
Ruud
1b6bf13619 Optimize and order dashboard list 2013-09-06 20:03:34 +02:00
Ruud
bc94e90994 Optimize available char listing 2013-09-06 19:37:39 +02:00
Ruud
347125365f movie.list didn't keep order 2013-09-06 19:19:20 +02:00
Ruud
59a718be20 Optimize events with single handler 2013-09-06 00:41:15 +02:00
Ruud
c41b3a612a Optimize dashboard soon listing 2013-09-06 00:24:17 +02:00
Ruud
23f77df911 Optimize profile queries 2013-09-06 00:23:52 +02:00
Ruud
117b952455 Default back to type on protocol. fix #2120 2013-09-05 21:46:00 +02:00
Ruud
7714504831 Run dashboard calls serial 2013-09-04 23:20:03 +02:00
Ruud
5c61c24c04 Lazyload file list in manage tab 2013-09-04 22:39:42 +02:00
Ruud
b11e1d48e0 Suggestion listing: load library in single query 2013-09-04 22:30:32 +02:00
Ruud
a6ce114284 Optimize suggestion listing 2013-09-04 22:30:32 +02:00
Ruud
88d512eacc Don't try to use releases when there aren't any 2013-09-04 22:30:32 +02:00
Ruud
f4d5366c93 Remove profile from dashboard list 2013-09-04 22:30:32 +02:00
Ruud
ac9aaec7b8 Optimize movie.list 2013-09-04 22:30:32 +02:00
Ruud
0c5b950c87 Add manual to tryNextRelease 2013-09-04 22:30:32 +02:00
Ruud
47141f8e4f Api: added release.for_movie
Get all releases for a single movie
2013-09-04 22:30:32 +02:00
Ruud
ec302fe665 Make sure that a faulty api call ends after an error 2013-09-04 13:46:51 +02:00
Ruud
7f304b0c28 Don't load profile on movie list 2013-09-03 22:50:27 +02:00
Ruud
8f88f7d89b Javascript and css cleanup 2013-09-03 22:13:42 +02:00
Ruud
400fd461ab Always add timestamp to registered statics 2013-09-03 21:12:22 +02:00
Ruud
cd8d2d4808 PublicHD description cache timeout 2013-09-03 20:23:40 +02:00
Ruud
4cfa79488f PublicHD cache description call 2013-09-03 20:21:49 +02:00
Ruud
b5993bcc21 NonBlock calls need to finish 2013-09-03 19:14:59 +02:00
Ruud
6af00bf026 Standardize cache_key generation 2013-09-03 12:48:24 +02:00
Ruud
97c456c9e1 Optimize quality caching 2013-09-03 12:47:44 +02:00
Ruud
08f44197f3 Use own cache 2013-09-03 12:14:02 +02:00
Ruud
779c7d2942 Remove mutable objects from function args 2013-09-02 22:44:44 +02:00
Ruud
7fd14e0283 Code cleanup 2013-09-02 21:59:06 +02:00
Ruud
7d32a8750d type > protocol 2013-09-02 16:53:39 +02:00
Ruud
110e0b78fc Merge branch 'file_extension' of git://github.com/DarthNerdus/CouchPotatoServer into DarthNerdus-file_extension 2013-09-02 16:51:17 +02:00
Ruud
bc77812488 Copy file and maybe copy stats. fix #349 2013-09-02 16:49:57 +02:00
Ruud
3e28cd5c95 local ip checking helper 2013-09-02 15:27:18 +02:00
Ruud
2715dbaaa5 Don't do failed checking on local requests 2013-09-02 15:27:06 +02:00
Ruud
3baf12d3e4 Make sure cleanhost only has one trailing slash 2013-09-02 14:54:54 +02:00
Ruud
a428d36604 Wrap requests in try for better failing
Or would it be worse failing?
2013-09-02 14:35:05 +02:00
Ruud
b5207bc88c Return releasedate as string 2013-09-02 14:27:16 +02:00
Ruud
910578a2ac Use TheMovieDB v3 api 2013-09-02 14:10:31 +02:00
Ruud
88176997e7 Don't use year if it's the first in the identified string. fix #1815 2013-09-02 00:00:27 +02:00
Ruud
233e6f9be0 Movie class wasn't removed on delete cancel. fix #1962 2013-09-01 23:33:24 +02:00
Ruud
1fd11fb547 Don't show delete dialog for category if it doesn't exist yet. fix #1961 2013-09-01 23:28:55 +02:00
Ruud
8bfd206578 Option to disable direct searching on adding. closes #2054 2013-09-01 23:18:12 +02:00
Ruud
62c6fd2e40 Don't error out on faulty PublicHD page. fix #2014 2013-09-01 23:05:28 +02:00
Ruud
ac2d2a0463 Always search on empty release dates. fix #2035 2013-09-01 22:51:59 +02:00
Ruud
c1e4b47b99 Return category by default. fix #2073 2013-09-01 18:21:53 +02:00
Jesse Read
32b479467a Fix missed type/protocol change. Fixes torrents being created as .movie files. 2013-08-31 20:45:37 -04:00
Ruud
6cab2b34d6 Continue after empty folder while loading plugins 2013-09-01 02:10:31 +02:00
Ruud
9e744199fe Make sure messages isn't empty 2013-09-01 01:44:47 +02:00
Ruud
b22021e7f0 Try next log remove, don't stop 2013-09-01 00:43:53 +02:00
Ruud
68bdf47ea4 Use protocol, not type for sorting 2013-09-01 00:31:47 +02:00
Ruud
af2876bd71 Lock same api routes 2013-09-01 00:24:47 +02:00
Ruud
1e5d6bad2a Lock while editing listeners 2013-09-01 00:24:18 +02:00
Ruud
f6c836157d Movie db to bottom in scanner 2013-09-01 00:22:22 +02:00
Ruud
d10874f216 Video object on iPad doesn't listen to z-index. fix #2093 2013-08-31 19:22:32 +02:00
Ruud
700713abcf Don't try to use undefined response 2013-08-31 17:48:19 +02:00
Ruud
5180426fc1 Remove debug print 2013-08-31 17:09:23 +02:00
Ruud
e1c8a08f2f Run api requests in own thread 2013-08-31 17:07:46 +02:00
Ruud
16f0bcc3ac Don't run handler if it doesn't exist.. 2013-08-31 17:04:53 +02:00
Ruud
9c98a38604 Tornado update 2013-08-31 15:59:47 +02:00
Ruud
1b03c7e474 Use finish instead of write 2013-08-31 15:32:45 +02:00
Ruud
689feb78d0 Torrentshack missing category for pre-dvd releases. fix #2083 2013-08-31 14:33:30 +02:00
Ruud
336b15b199 Deluge import cleanup 2013-08-30 19:21:31 +02:00
Ruud
4a4bb819ec Merge branch 'deluge' of git://github.com/techmunk/CouchPotatoServer into techmunk-deluge 2013-08-30 18:40:35 +02:00
Techmunk
48be010f33 Fix up some debug messages, and the torrent completed status. 2013-08-30 10:25:58 +10:00
Techmunk
104e21b314 Fix for deluge downloading torrent files. 2013-08-28 20:41:02 +10:00
Ruud
aaf5cab138 Encode folder returned from downloader. fix #2071 2013-08-27 23:38:51 +02:00
Ruud
22b744340a Properly remove backup folder 2013-08-27 22:25:56 +02:00
Techmunk
2954558004 Fix up deluge is Finished status matching. 2013-08-27 20:13:29 +10:00
Ruud
b797590a4e Make sure extr_files exists 2013-08-25 20:16:08 +02:00
Ruud
9d71fe1724 Deluge proper error logging. fix #2069 2013-08-25 12:24:15 +02:00
Ruud
9ad0ed642d Don't use type yet. fix #2068 2013-08-25 12:07:13 +02:00
Ruud
cbd217271d Don't load options twice 2013-08-25 00:59:37 +02:00
Ruud
65896497fb Return true for loader 2013-08-24 20:22:31 +02:00
Ruud
54a37b577d Import cleanup
Conflicts:
	couchpotato/core/providers/torrent/sceneaccess/main.py
2013-08-24 20:15:54 +02:00
Ruud
f1948ffb6a Just load media recursively 2013-08-24 20:12:59 +02:00
Jason Mehring
7dd3b0ed15 fix loader error messages for modules that are selected recursively but are not really modules 2013-08-24 20:07:32 +02:00
Jason Mehring
11fcfa8202 Moved library and refactored to its new location. Modified anything firing library.add/update/_release date to now fire library.add.movie...
Conflicts:
	couchpotato/core/loader.py
	couchpotato/core/media/show/_base/main.py
	couchpotato/core/media/show/library/season/main.py
2013-08-24 20:04:27 +02:00
Ruud
199e61ea14 Fallback on type for current downloads 2013-08-24 16:37:16 +02:00
Ruud
0daa6c8eff Merge branch 'develop_unrar_fixes' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_unrar_fixes 2013-08-24 16:16:48 +02:00
Ruud
b1b5f97f03 Deluge fixes 2013-08-24 16:14:18 +02:00
Ruud
32d5587669 Don't load modules without __init__.py 2013-08-24 16:06:17 +02:00
mano3m
c13c0f24e5 Change type to protocol in release and renamer 2013-08-24 15:50:19 +02:00
mano3m
7eb1d72333 remove move exception from unrar PR 2013-08-24 15:50:19 +02:00
Ruud
3d6ec1feba Move info providers to proper folder 2013-08-24 15:31:30 +02:00
Ruud
c267232160 Add unrar support
Thanks @mano3m
2013-08-24 15:04:56 +02:00
Ruud
48f4b008df Move deluge lib to libs folder 2013-08-24 14:46:46 +02:00
Ruud
ae1f181fbf Merge branch 'deluge' of git://github.com/techmunk/CouchPotatoServer into techmunk-deluge
2013-08-24 14:42:17 +02:00
Ruud
cbfee72d51 rTorrent make pause advanced setting 2013-08-24 14:38:57 +02:00
Ruud
ee709054f2 rTorrent rename type to protocol
code styling
2013-08-24 14:35:57 +02:00
Ruud
ee60ec962b Merge branch 'feature/dev_rtorrent' of git://github.com/fuzeman/CouchPotatoServer into fuzeman-feature/dev_rtorrent 2013-08-24 14:33:17 +02:00
Ruud
e013e38c5e Update ubuntu.init
Thanks @moriame
2013-08-24 14:26:16 +02:00
Ruud
20aa78105f Do window size check inside load event 2013-08-24 14:22:15 +02:00
Ruud
770590e4f2 Match default ports
Thanks @cpg
2013-08-24 14:08:05 +02:00
Ruud
8e9e7b49ea Simplify linking
Thanks @mano3m
2013-08-24 14:03:17 +02:00
Ruud
08554889fd Add the old rottentomatoes to default enabled list 2013-08-24 13:34:45 +02:00
Ruud
8ac2869de3 Merge branch 'rotten_tomatoes_custom_urls' of git://github.com/Lordcrash/CouchPotatoServer into Lordcrash-rotten_tomatoes_custom_urls 2013-08-24 13:28:10 +02:00
Ruud
bb8e8a0df5 Merge branch 'develop_seed_fixes' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_seed_fixes 2013-08-24 13:22:29 +02:00
Ruud
e2bd6a91cd MPAA rating for renamer 2013-08-24 13:21:39 +02:00
Ruud
ed0e5ef497 XMBC notification, better remote folder description 2013-08-24 12:24:15 +02:00
Ruud
e1e475e605 Merge branch 'develop_XBMC' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_XBMC 2013-08-24 12:19:32 +02:00
Ruud
cef5b04eb1 Return unique imdb list 2013-08-24 12:14:15 +02:00
Ruud
7e44af936d Watch shutdown when adding automation movies 2013-08-24 12:14:02 +02:00
Ruud
6aec5a9a60 Cleanup IMDB provider 2013-08-24 12:13:45 +02:00
Ruud
79c75c886b Merge branch 'develop_automationIMDB' of git://github.com/dkboy/CouchPotatoServer into dkboy-develop_automationIMDB 2013-08-24 10:59:32 +02:00
mano3m
bf6bcaed72 provide more info in case no movie is found
Several users reported an issue with "more than one group found (0)",
and it was unclear to them what it meant. This might help.
2013-08-22 21:20:02 +02:00
mano3m
70bc2a6656 use right variable for pause
fixes #2049
2013-08-21 20:59:39 +02:00
mano3m
695cdea447 Remove 'move' exception
No need to remove files when 'move' is selected as the downloaders do
this themselves now when cleaning up
2013-08-21 20:59:38 +02:00
mano3m
d0735a6d58 Add failsafe for symlink errors
E.g. on Windows you need Admin rights to symlink...
2013-08-21 20:59:38 +02:00
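The failsafe described above can be sketched roughly. This is not the actual CouchPotato renamer code — `link_or_copy` is a hypothetical helper — but it shows the pattern: attempt the symlink, and fall back to copying when the platform refuses (e.g. Windows without Administrator rights):

```python
import os
import shutil

def link_or_copy(src, dst):
    # Symlinking needs Administrator rights on Windows, so rather than
    # erroring out, fall back to a plain copy of the file.
    try:
        os.symlink(src, dst)
    except (OSError, NotImplementedError):
        shutil.copyfile(src, dst)
```

Either way the destination file ends up in place, which is what the renamer cares about.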
mano3m
175c26bea9 Fix untagDir and hastagDir
Changes in commit 8a252bff64 broke the
tagging functionality
2013-08-21 20:59:23 +02:00
Techmunk
8a298edd4e Implementation of Deluge downloader. 2013-08-21 23:52:54 +10:00
Ruud
9860a1c138 Default to movie type 2013-08-18 13:17:40 +02:00
Ruud
3dff598d03 Add multiprovider for provider grouping 2013-08-18 11:48:00 +02:00
Ruud
62b571d5f1 Rename type to protocol 2013-08-18 11:47:54 +02:00
Ruud
3af6623a91 Move registerPlugin to __new__ magic 2013-08-18 11:47:49 +02:00
Ruud
c73ed8a4c5 Add multiple categories for BRRIP on TPB. fix #2025 2013-08-16 20:05:30 +02:00
Ruud
4d5ba65254 Migrate options 2013-08-16 17:23:40 +02:00
Ruud
91856f1159 Searcher base
Re-usable cronjob code
2013-08-16 16:52:12 +02:00
Ruud
f7da408f83 Searcher conf section 2013-08-16 10:21:44 +02:00
Ruud
2824c55231 Give moviesearcher a unique name 2013-08-15 23:52:48 +02:00
Ruud
874655846c Move movie plugin to media folder 2013-08-15 23:52:43 +02:00
Ruud
1620acedb1 Move movie to new media type folder 2013-08-15 23:52:37 +02:00
Ruud
6395e5dbbb Cleanup console log 2013-08-15 23:52:16 +02:00
Ruud
251d9cdb8a Placeholder for preferred words 2013-08-15 18:47:57 +02:00
Ruud
623571acbb Make category destination editable 2013-08-15 18:31:06 +02:00
Ruud
250f07ffa7 Optimize dashboard query 2013-08-14 16:55:57 +02:00
Ruud
8917d7c16c Optimize movie.list query 2013-08-14 16:47:59 +02:00
Ruud
d759280c18 Don't update library items on shutdown 2013-08-14 12:31:41 +02:00
Ruud
67bc3903d4 Don't show loader for scanner if page isn't loaded yet 2013-08-14 12:20:38 +02:00
Ruud
cf6f83a44b Option to disable manage scan at startup. fix #1951 2013-08-14 12:14:52 +02:00
Ruud
4b15563ba3 Don't use in_progress when it isn't set 2013-08-14 12:13:52 +02:00
Ruud
dc36e15448 Don't run multiple manage.progress requests 2013-08-14 11:56:08 +02:00
Ruud
0b6330e98b Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-08-13 20:56:46 +02:00
Ruud
2e93687bb4 Don't try to loop over empty enablers 2013-08-13 17:46:41 +02:00
Ruud
0f925a466a Also ignore __ when importing folders 2013-08-13 17:31:12 +02:00
Ruud
16eeeda787 Ignore folder include with __ at beginning 2013-08-13 17:25:24 +02:00
Ruud
52f1df98bb Don't try to split on empty string 2013-08-13 16:51:46 +02:00
Ruud
a0ccff23a3 Remove duplicate spaces 2013-08-13 16:08:34 +02:00
Ruud
b8bed627a8 Add possible title with some char replacements 2013-08-13 16:08:21 +02:00
Ruud
8d058d9dc8 Add hdscr to screener quality 2013-08-13 15:45:05 +02:00
Ruud
57e92ff8d3 Optimized frontend notifications 2013-08-13 15:40:56 +02:00
Ruud
6eff724f97 Clean nonblocking requestshandler 2013-08-13 15:36:11 +02:00
Ruud Burger
55c3fe503b Merge pull request #1985 from mano3m/develop_nzbget
Fix NZBGet url issue
2013-08-12 01:21:41 -07:00
Ruud Burger
7f1ac63c58 Merge pull request #2005 from mano3m/develop_sorting
Regard torrents and torrent_magnet the same
2013-08-12 01:08:05 -07:00
Dean Gardiner
2bb2e28f91 Updated rTorrent library and fixed some issues with ratio setup. 2013-08-12 15:32:15 +12:00
Dean Gardiner
0bdffc5036 Change to ratio group setup to ensure everything is set correctly. 2013-08-12 15:32:14 +12:00
Dean Gardiner
7202fbf084 Removed stop_complete option, Can instead be disabled by setting seed_ratio to zero on the provider. 2013-08-12 15:32:13 +12:00
Dean Gardiner
317c3afb7a Few minor fixes and implemented delete_files option via shutil.rmtree 2013-08-12 15:32:13 +12:00
Dean Gardiner
577baeca59 Hiding remove files in the rTorrent downloader until it's implemented. 2013-08-12 15:32:12 +12:00
Dean Gardiner
7c680cac10 Updated rTorrent downloader to set ratio stop action, added new seeding methods and updated the rTorrent library 2013-08-12 15:32:11 +12:00
Dean Gardiner
0fadbd52a3 Cleaned up imports and added support for downloading magnet torrents via sources. 2013-08-12 15:32:10 +12:00
Dean Gardiner
38e204dfe8 Added support for labels on the rtorrent downloader. 2013-08-12 15:32:10 +12:00
Dean Gardiner
bf62653531 Added missing 'folder' parameter on the rtorrent downloader to fix moving/linking issues. 2013-08-12 15:32:09 +12:00
Dean Gardiner
d851be41d3 Updated rtorrent-python library. 2013-08-12 15:32:08 +12:00
Dean Gardiner
3bd1875321 Added initial rtorrent downloader, currently testing, possibly has some bugs. 2013-08-12 15:32:00 +12:00
mano3m
448c1d69a7 Regard torrents and torrent_magnet the same
When sorting, torrents and torrent_magnets are treated the same by
taking only the first three characters of the protocol (as 'nzb' is
three chars), so the score prevails. Fixes #2004
2013-08-11 00:06:07 +02:00
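The three-character trick from the commit above can be sketched as a sort key. This is an illustrative sketch, not CouchPotato's actual sorter; the alphabetical nzb-before-torrent ordering here is just a side effect of the example, not how CouchPotato weighs protocols:

```python
def sort_key(release):
    # Compare only the first three characters of the protocol, so that
    # 'torrent' and 'torrent_magnet' both collapse to 'tor' and the
    # score decides between them; 'nzb' is exactly three characters.
    return (release['protocol'][:3], -release['score'])

releases = [
    {'name': 'A', 'protocol': 'torrent_magnet', 'score': 50},
    {'name': 'B', 'protocol': 'torrent', 'score': 80},
    {'name': 'C', 'protocol': 'nzb', 'score': 60},
]
ordered = sorted(releases, key=sort_key)
```

Within the `'tor'` group, B (score 80) now outranks A (score 50) regardless of whether it was snatched as a .torrent file or a magnet link.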
Ruud
c99a5cb535 Don't autoadd when already in wanted 2013-08-07 20:06:30 +02:00
Dean Gardiner
b824ef93bd Fix plex notifications test method. 2013-08-04 15:39:02 +12:00
mano3m
0492e90d6f XBMC: properly check if host is local
And added option to scan if remote
2013-08-03 01:52:20 +02:00
Micah James
4ffda9f705 Made code more python-y per mano3ms recommendation. 2013-08-01 23:15:36 -04:00
mano3m
b32d4fc42d Fix NZBGet url issue 2013-08-01 23:24:25 +02:00
Dean Gardiner
c92aa91aa7 Corrected notify() force parameter default. 2013-08-02 02:43:55 +12:00
Dean Gardiner
a6c32a7e30 Fixed Plex notifications
Conflicts:
	couchpotato/core/notifications/plex/main.py
2013-08-02 02:43:37 +12:00
Micah James
4330dc39bf Changed description to be better suited for this. 2013-07-31 23:14:58 -04:00
Micah James
da50b19b6b Added custom url code handling 2013-07-31 23:06:12 -04:00
Micah James
797018fb8a Revert "Adding more code."
This reverts commit 3a8f891c7d.
2013-07-31 22:47:52 -04:00
Micah James
3a8f891c7d Adding more code. 2013-07-31 22:45:48 -04:00
Micah James
56a788286c Adding code for custom urls UI 2013-07-31 22:41:49 -04:00
mano3m
fd95364d5f uTorrent ratio issue fixed
The tryFloat function returns 0 if it is fed a float(!). This resulted in the seed_ratio being set to 0 on the first/automatic download. When downloading manually, it did work, as the ratio is stored as a string.
2013-07-31 15:04:48 +02:00
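The failure mode described above can be illustrated in miniature. Neither function is CouchPotato's real `tryFloat` — they are hypothetical stand-ins showing why a helper that only converts strings silently zeroes out a value that is already a float:

```python
def try_float_buggy(value):
    # Only string input is converted; a value that is already a float
    # has no .strip() and falls through to the 0 fallback.
    try:
        return float(value.strip())
    except (AttributeError, ValueError):
        return 0

def try_float_fixed(value):
    # float() accepts both strings and numbers directly.
    try:
        return float(value)
    except (TypeError, ValueError):
        return 0
```

With the buggy version, a seed ratio stored as `0.5` (a float) comes back as `0`, while the same ratio stored as `"0.5"` converts fine — matching the automatic-vs-manual behaviour the commit describes.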
mano3m
470fde0890 Unset the uTorrent read only flags
Fix for #1871

Note that this is a fix for Windows only. I am unaware if this issue
arises on Linux/Mac and what happens with this fix on those systems.
2013-07-23 19:07:36 +02:00
Ruud
f12d878c0b Select category for search, suggest & edit 2013-07-22 21:57:13 +02:00
Ruud
e8993932c1 Check isMac function 2013-07-22 21:56:33 +02:00
Ruud
e3933e4ddc Proper meta tag 2013-07-22 21:56:22 +02:00
Ruud
dd67239b6e Add categories to settings 2013-07-21 19:12:53 +02:00
Ruud
1ea0d3bd8b Move providers to main searcher tab in settings 2013-07-21 19:12:32 +02:00
Ruud
8b952d4be6 Combine global and category words 2013-07-19 16:58:49 +02:00
Ruud
9e8a3bc701 Movie category migrate 2013-07-15 22:51:53 +02:00
Ruud
76807176fb Merge branch 'develop-categories' of git://github.com/clinton-hall/CouchPotatoServer into clinton-hall-develop-categories
Conflicts:
	couchpotato/core/plugins/score/main.py
2013-07-15 20:47:29 +02:00
iguyking
3650624e4b Update contributing.md
Fixed to say what was intended
2013-07-15 20:44:42 +02:00
Ruud Burger
585c509aba Merge pull request #1950 from mano3m/develop_rpc-url
Add rpc_url to Transmission options
2013-07-15 04:20:25 -07:00
Ruud Burger
fc8db130e0 Merge pull request #1947 from iguyking/patch-1
Update contributing.md
2013-07-15 04:17:17 -07:00
mano3m
046c7e732f Add rpc_url to Transmission options
Fixes  #1832
2013-07-14 23:43:07 +02:00
mano3m
564a27461d XBMC: Only add directory if XBMC is on localhost 2013-07-14 23:30:37 +02:00
iguyking
682d678f91 Update contributing.md
Fixed to say what was intended
2013-07-14 11:49:48 -05:00
mano3m
4ebbc1a01d XBMC: Only scan the new movie folder 2013-07-14 02:19:35 +02:00
Ruud
4ec32a6403 Merge branch 'develop_seed_fixes' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_seed_fixes 2013-07-13 17:56:07 +02:00
Ruud
412627aab0 Move rating and genres to suggestions only 2013-07-13 17:52:40 +02:00
mano3m
2584abda0e Several fixes and increased readability 2013-07-13 17:06:59 +02:00
dkboy
7692322fba Expand IMDB automation provider to include charts
Expand IMDB automation provider to include certain top charts, this
includes the 'in theaters' list, as well as the top 250 list. They both
respect the minimum requirement settings.
2013-07-13 16:45:39 +12:00
Ruud
954018fea2 Youtube trailer search in https 2013-07-12 21:03:03 +02:00
Ruud
ebf37f7310 Cleanup plex urls 2013-07-12 20:52:41 +02:00
Ruud
f22b836ede Combine adopt 2013-07-12 14:42:59 +02:00
Ruud
1cea786d66 Style rating and genres 2013-07-12 14:36:04 +02:00
dkboy
9be10f7b79 Add Rating / Genre to Dashboard Suggestions
Add Rating and up to 3 Genres to movie suggestions, to avoid constantly
jumping through to IMDB site.
2013-07-12 21:49:24 +12:00
Ruud
1f35d0ec2f Remove debug print 2013-07-11 17:36:27 +02:00
Ruud
9fcf36a2ff Add WEB-DL and WEB-Rip. fix #1913 2013-07-11 17:34:55 +02:00
Ruud
30f5a66487 AwesomeHD: Log wrong passkey. fix #1912 2013-07-11 15:24:20 +02:00
Ruud
60e0ad1f5d Add Windows Media Center / Explorer folder.jpg creation. closes #1932 2013-07-11 15:05:08 +02:00
Ruud
ed60b4670e Move root creation to metadata base 2013-07-11 15:04:39 +02:00
Ruud
318daaf083 Cleanup BitSoup 2013-07-09 23:31:43 +02:00
Ruud
182987218b Merge branch 'develop' of git://github.com/dkboy/CouchPotatoServer into dkboy-develop 2013-07-09 23:13:15 +02:00
Ruud
5ff8c7302f Sabnzbd prio description 2013-07-09 23:08:33 +02:00
Ruud
398712403b Merge branch 'develop' of git://github.com/gthicks/CouchPotatoServer into gthicks-develop 2013-07-09 23:04:28 +02:00
Ruud
63f72eb23b Merge branch 'refs/heads/seeding' into develop 2013-07-09 22:53:14 +02:00
Ruud
9dea6d7200 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-07-09 22:52:53 +02:00
Ruud
36f63bdf99 Seeding cleanup and better defaults 2013-07-09 22:52:32 +02:00
Ruud
a09fc14625 Twitter DM didn't work 2013-07-09 20:32:29 +02:00
dkboy
71e280238d Fixed missing detail_url 2013-07-10 01:48:11 +12:00
Ruud
e20bb13649 Delete NZBx 2013-07-08 11:31:13 +02:00
Ruud
ed8108a9d8 Remove NZBsRus 2013-07-08 11:30:55 +02:00
Ruud
c0b3c9a330 Make description a bit shorter 2013-07-07 13:44:49 +02:00
Ruud
8a252bff64 Don't use parentdir for tagging 2013-07-07 13:00:38 +02:00
Ruud
d3d3106fc9 Merge branch 'develop_seed' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_seed 2013-07-07 11:37:53 +02:00
dkboy
1ebb09226d Add Bitsoup provider 2013-07-07 14:23:15 +12:00
Ruud
52163428e9 Break if media headers are corrupt. fix #1828 2013-07-07 00:09:22 +02:00
Ruud
da9dda2c2b Make minimal movie automation clearer. fix #1923 2013-07-06 23:39:34 +02:00
Ruud
a4a14cae96 Use forwarded host when provided. fix #1922 2013-07-06 23:26:46 +02:00
Garret
989d6c55c4 Added priority setting for SABnzbd
Includes ability to add nzb to queue paused.
2013-07-06 10:28:32 -07:00
Ruud
3b7376fd18 One up 2013-07-06 01:01:26 +02:00
Ruud
06a211a24a Ignore current suggested results 2013-07-06 00:49:26 +02:00
Ruud
c31b10c798 Ignore current suggested results 2013-07-06 00:49:11 +02:00
Ruud
1c3e6ba930 Ignore current suggested results 2013-07-06 00:24:57 +02:00
Ruud
acda664686 Merge branch 'refs/heads/develop' into desktop
Conflicts:
	version.py
2013-07-05 22:43:54 +02:00
Ruud
55af696b7c Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2013-07-05 22:18:27 +02:00
Ruud
99123ad1c3 Remove version on branch 2013-07-05 22:17:43 +02:00
Ruud
636e9514e8 Merge branch 'refs/heads/develop' 2013-07-05 22:11:10 +02:00
Ruud
cdf9cf5cf4 Yifi: don't search empty results. fix #1900 2013-07-05 21:54:55 +02:00
Ruud
797dedfcbb Remove cdX from subname. fix #1524 2013-07-05 21:28:07 +02:00
Ruud
b61de4866c Make subliminal work with Requests 1.0+ 2013-07-05 20:40:27 +02:00
Ruud
47e649643f Merge branch 'refs/heads/develop'
Conflicts:
	couchpotato/core/helpers/request.py
2013-07-01 23:34:40 +02:00
Ruud
931951ff37 Change default min size for 720p and 1080p 2013-06-30 15:57:58 +02:00
Ruud
6f42b4c316 Don't show coming soon when no dvd release is set 2013-06-30 15:21:06 +02:00
Ruud
58c446de2d Make string param boolean 2013-06-30 15:20:02 +02:00
Ruud
74bf6bc411 Always set info dict on library 2013-06-30 13:17:56 +02:00
Ruud
ad3c24f950 Improved "too early to search" calculations 2013-06-30 13:17:43 +02:00
mano3m
998e487fe8 NZBs are not torrents :) 2013-06-30 10:14:08 +02:00
Ruud
93346b0c63 Properly update release dates 2013-06-30 01:16:13 +02:00
mano3m
7d9920691f Fix uTorrent settings automatically
Note that this might not be the way we want to go?
2013-06-29 22:50:25 +02:00
Ruud
b1942678b4 Add hash and date to update available notification. fix #1883 2013-06-29 22:20:35 +02:00
Ruud
8c77d0d775 Add advanced option to search on launch. fix #1887 2013-06-29 22:03:29 +02:00
Ruud
3e667ee39a Couldn't press letter in movie filter. fix #1888 2013-06-29 21:56:24 +02:00
Ruud
52b2858ac2 Don't enable yifi by default 2013-06-29 21:39:53 +02:00
Ruud
6fcb4c2058 Change default automation interval 2013-06-29 21:07:07 +02:00
mano3m
7411670e22 Added complete download removal to SabNZBd 2013-06-29 10:36:02 +02:00
mano3m
cfd23c395a Add failed download handling to Transmission 2013-06-29 10:23:08 +02:00
Ruud
2e8f670e94 Remove import 2013-06-28 23:32:38 +02:00
mano3m
18a88eab51 Textual change 2013-06-26 20:02:25 +02:00
mano3m
84e9f9794d Add awesomehd torrent provider 2013-06-26 19:53:28 +02:00
mano3m
628c0e5dcc Add yify torrent provider 2013-06-26 19:52:39 +02:00
mano3m
cdee08bd36 Add status colours in dashboard 2013-06-26 19:49:05 +02:00
mano3m
7ed43da425 Also set seeding status in case nothing is done 2013-06-26 19:49:05 +02:00
mano3m
461a0b3645 Seeding support
Design intent:
- Option to turn seeding support on or off
- After torrent downloading is complete the seeding phase starts; seeding parameters can be set per torrent provider (0 disables them)
- When the seeding phase starts the checkSnatched function renames all files if (sym)linking/copying is used. The movie is set to done (!), the release to seeding status.
- Note that direct symlink functionality is removed, as the original file needs to end up in the movies store and not the downloader store (if the downloader cleans up its files, the original is deleted and the symlinks are useless)
- checkSnatched waits until the downloader sets the download to completed (seeding parameters met)
- When completed, checkSnatched initiates the renamer if move is used, or, if linking is used, asks the downloader to remove the torrent and clean up its files, and sets the release to downloaded
- Updated some of the .ignore file behavior to allow the downloader to remove its files

Known items/issues:
- only implemented for uTorrent and Transmission
- text in downloader settings is too long and messes up the layout...

To do (after this PR):
- implement for other torrent downloaders
- complete download removal for NZBs (remove from history in sabNZBd)
- failed download management for torrents (no seeders, takes too long, etc.)
- unrar support

Updates:
- Added transmission support
- Simplified uTorrent
- Added checkSnatched to renamer to make sure the poller is always first
- Updated default values and removed advanced option tag for providers
- Updated the tagger to allow removing of ignore tags and tagging when the group is not known
- Added tagging of downloading torrents
- Fixed subtitles being left over after seeding
2013-06-26 19:49:04 +02:00
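The seeding design above boils down to a small status machine: snatched → seeding once the download finishes, seeding → downloaded once the downloader reports the seed goal is met. A simplified sketch of that flow, with hypothetical names (this is an illustration of the described design, not CouchPotato's actual implementation):

```python
# Hypothetical sketch of the seeding workflow described in the commit
# body above. Statuses and field names are illustrative assumptions.

def check_snatched(release, downloader_status, use_linking=True):
    """Advance a release's status based on what the downloader reports."""
    if release["status"] == "snatched" and downloader_status == "download_done":
        # Download finished: rename via (sym)links/copy, keep the torrent seeding.
        release["status"] = "seeding"
    elif release["status"] == "seeding" and downloader_status == "completed":
        # Seeding parameters met: ask downloader to remove the torrent
        # (only needed when linking was used) and mark the release done.
        release["status"] = "downloaded"
        release["torrent_removed"] = use_linking
    return release

release = {"status": "snatched"}
check_snatched(release, "download_done")  # release is now seeding
check_snatched(release, "completed")      # release is now downloaded
```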
Ruud
bd56539103 Yifi cleanup 2013-06-24 22:31:50 +02:00
Ruud
9bcd3de69b Merge branch 'develop' of git://github.com/Mochaka/CouchPotatoServer into Mochaka-develop 2013-06-24 22:08:00 +02:00
Ruud
d8f57963a1 NZBIndex: Search for year inside brackets. closes #1874 2013-06-24 22:07:21 +02:00
Ruud
bf59d2f357 Allow unknown keywords for all api calls. fix #1881 2013-06-24 21:22:12 +02:00
Ruud
5328f7fe69 Allow unknown keywords for all api calls. fix #1881 2013-06-24 21:21:49 +02:00
Ruud
fb90f6591b Get array arguments as list. fix #1875 2013-06-24 00:26:31 +02:00
Ruud
9eea42b121 Get array arguments as list. fix #1875 2013-06-24 00:26:00 +02:00
Ruud
d66722e737 Allow non trailing slash API calls 2013-06-23 23:30:47 +02:00
Ruud
374f8ba1de Allow non trailing slash API calls 2013-06-23 23:28:13 +02:00
Ruud
74c984dec3 Send CP headers to suggestion call. fix #1872 2013-06-23 20:44:11 +02:00
Ruud
52ea0215f0 Use done for suggestion also 2013-06-23 19:14:11 +02:00
Ruud
ea3d719b32 Suggest on wrong dev port 2013-06-23 19:09:07 +02:00
Ruud
fd1e655075 Initial suggestion support 2013-06-23 19:07:03 +02:00
Ruud
47d37c2ec9 Merge branch 'refs/heads/develop' 2013-06-23 12:24:01 +02:00
Ruud
9f8d439780 Add limit to CP search api 2013-06-22 17:01:24 +02:00
Aaron Florey
7e1bdc99eb Add Yify Torrent Provider 2013-06-23 00:11:01 +10:00
Ruud
dac36d7f55 IPTorrents ignore empty results 2013-06-22 14:16:02 +02:00
Ruud
9d495a10ec Unicode static folder 2013-06-22 01:38:07 +02:00
Ruud
9bb99319ba SplitString don't clean 2013-06-22 01:37:27 +02:00
Ruud
bc8d8dcd04 Update guessit with unicode fix 2013-06-22 00:34:58 +02:00
Ruud
b2d9a7675d Add version to SAB description 2013-06-22 00:33:12 +02:00
Ruud
2477197656 Don't use unicode in repo 2013-06-22 00:33:00 +02:00
Ruud
171083b2f1 Remove empty values from splitString. fix #1795 2013-06-21 13:44:14 +02:00
Ruud
e592eb969f NZBget error when downloadrate is 0. fix #1849 2013-06-21 13:00:58 +02:00
Ruud
db1493f138 Update pytwitter library. fix #1847 2013-06-21 12:50:58 +02:00
Ruud
57c270f8fa Don't break while sending messages to listeners 2013-06-21 11:32:45 +02:00
Ruud
bfe8bc89c0 IMDB description csv link 2013-06-19 23:39:00 +02:00
Ruud
0a00862495 Show csv imdb export in image 2013-06-19 23:34:58 +02:00
sax
7dd53d93cd Added nzb support to Synology downloader. 2013-06-19 22:43:11 +02:00
theorem21
abe65d4064 Update README.md
Added FreeBSD installation instructions. Requires an additional FreeBSD init script (pending creation)
2013-06-19 22:36:56 +02:00
Ruud
4977b31ba6 Use failed status to ignore releases too 2013-06-16 00:21:33 +02:00
Ruud
c1beb85ba5 Add spotter to name for scoring 2013-06-15 23:32:44 +02:00
Ruud
ca9a78eea4 Advanced option for XBMC to only update first in list
Thanks @cliffordwhansen
2013-06-15 22:17:23 +02:00
Ruud
9bf006f4d3 Return if api is not found 2013-06-15 21:43:14 +02:00
Ruud
3bb2a082b7 AwesomeHD provider
Thanks @jrsdead
2013-06-15 20:41:23 +02:00
Ruud
92d11522d2 Use id for HDBits torrent name 2013-06-15 00:06:14 +02:00
Ruud
44cfdc1503 Include full requests lib 2013-06-15 00:04:15 +02:00
Ruud
2fdcbedea8 Use has_key for events check 2013-06-15 00:02:37 +02:00
Ruud
787c7fd966 Codestyle cleanup 2013-06-14 23:35:28 +02:00
sax
09b4ad6937 Fixed torrent support for Synology downloader to work properly with torrent files passed directly by CouchPotato. 2013-06-14 23:31:56 +02:00
sax
580d43aeaf Updated requests library to version 1.2.3 2013-06-14 23:31:47 +02:00
sax
a1a7fec15f Added torrent support for Synology downloader. 2013-06-14 23:31:40 +02:00
Ruud
6dcd74d116 Re-use code for ignore toggle 2013-06-14 23:21:41 +02:00
Ruud
187f5a8a93 Merge branch 'develop' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop 2013-06-14 22:43:19 +02:00
Ruud
2eb938147a Move login downloads to default list item 2013-06-14 22:08:25 +02:00
Ruud
deffb75c14 TorrentByte provider
Thanks @StealthGod
2013-06-14 21:52:15 +02:00
Ruud
f91707bfbe Uncomment debug code 2013-06-14 21:51:41 +02:00
Ruud
8aba7825dc Only show to_early when it has items 2013-06-14 21:24:38 +02:00
Joel Kåberg
b8b5b2fef2 dont spam the log damnit! 2013-06-14 21:17:45 +02:00
Ruud
f4d6d69184 Check if handler has parent 2013-06-14 20:48:20 +02:00
Ben Fox-Moore
a5b1c685e1 Allow IPTorrents provider to read results across multiple pages
Conflicts:
	couchpotato/core/providers/torrent/iptorrents/main.py
2013-06-14 20:47:05 +02:00
Ruud
609805b84d Don't allow keyerror in event 2013-06-14 20:04:49 +02:00
Ruud
00d1da7c01 Bind quickscan to class 2013-06-14 19:51:31 +02:00
Ruud
7335726c7d Add handler as well 2013-06-14 19:51:20 +02:00
Ruud
02779939f0 Catch im_self error 2013-06-14 19:51:14 +02:00
Ruud
6c6f015f40 Use str not unicode in minification 2013-06-14 19:47:54 +02:00
Ruud
f087d38b86 Cleanup 2013-06-14 17:36:26 +02:00
Ruud
c78957f55c Don't try to run event without beforeCall 2013-06-14 17:24:34 +02:00
Ruud
9ce0c47cd4 More login fixes 2013-06-14 16:03:02 +02:00
clinton-hall
60034f2c96 add category preferred words and partial ignore. 2013-06-14 21:56:26 +09:30
Ruud
c9a4af218e Send port with referer. fix #1827 2013-06-14 13:54:01 +02:00
Ruud
c5c2e61e06 Log startup errors 2013-06-14 11:22:29 +02:00
Ruud
b2930dd6a7 Encode used path on startup. fix #1797 fix #1297 2013-06-14 11:11:34 +02:00
Ruud
4aa6700ceb Update SQLAlchemy 2013-06-14 11:00:06 +02:00
Ruud
267ecfacab Status check in ubuntu init script
Thanks @LeonB
2013-06-14 08:57:37 +02:00
clinton-hall
007597239f add categories 2013-06-14 15:06:59 +09:30
Ruud
5699abf1be Use new KickAss domain 2013-06-14 00:43:03 +02:00
Ruud
a6ccd037e2 Login check for SceneHD 2013-06-14 00:37:55 +02:00
Ruud
009991ce4c Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-06-14 00:04:44 +02:00
Ruud
6ef788a8f4 Check login after 1 hour 2013-06-14 00:03:48 +02:00
Ruud
fa37f7d40a Add some logging to core messaging 2013-06-13 12:15:59 +02:00
Ruud
b195cebac7 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-06-12 23:46:35 +02:00
Ruud
8aeea60888 Update Tornado 2013-06-12 23:42:38 +02:00
Ruud
6e0857c6c1 Remove Flask dependencies 2013-06-12 23:37:08 +02:00
Ruud Burger
260fdbe3b3 Merge pull request #1836 from clinton-hall/develop-extra-logging
Add logging when no rating available
2013-06-12 02:05:21 -07:00
mano3m
2f30c6c781 fix failed issues
As reported in issue #1822 I broke try next release when failed. This
commit adds the failed status to several items.
2013-06-11 21:43:01 +02:00
Clinton Hall
d5b4da655a add logging when no rating available 2013-06-11 21:51:56 +09:30
Ruud
1694ed7758 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-06-10 21:14:02 +02:00
Ruud
ee6cc6d319 PTP torrent id in lowercases 2013-06-10 21:10:58 +02:00
Ruud Burger
7670e320ba Merge pull request #1799 from clinton-hall/develop-audiochannels
add audio channels to renamer
2013-06-10 11:39:34 -07:00
Ruud
15ab745bd0 Don't assume imdb key. fixes #1819 2013-06-08 18:04:46 +02:00
Ruud
7468b33991 Send along ignored movies 2013-06-08 17:32:45 +02:00
Ruud
750e02f38a Close zipfile. fixes #1798 2013-06-08 16:04:14 +02:00
Ruud
e2852407ea One up 2013-06-03 22:22:44 +02:00
Ruud
88e738c6cd Don't show double updater name 2013-06-03 22:22:35 +02:00
Ruud
eaae8bdb0b Merge branch 'refs/heads/develop' into desktop 2013-06-03 22:00:21 +02:00
Ruud
40324ee89f Merge branch 'refs/heads/develop' 2013-06-03 21:59:26 +02:00
Ruud
95d146fea2 Send referer with scheme 2013-06-02 14:22:59 +02:00
Ruud
dc20b68a37 See if need to login on "belongs_to" check. fix #1190 2013-05-31 16:32:03 +02:00
clinton-hall
563e3072a5 add audio channels to renamer 2013-05-31 13:44:40 +09:30
Ruud
b3ba4db00b Append instead of add for subtitle file list 2013-05-29 19:30:51 +02:00
Ruud
9db1f3430e Append instead of add for subtitle file list 2013-05-29 19:30:40 +02:00
Ruud
ec19932eef Merge branch 'refs/heads/develop' 2013-05-29 19:10:50 +02:00
Ruud
a4c1480a1a Force update check from dropdown 2013-05-29 19:03:49 +02:00
Ruud
91e0452320 Torrentshack cleanup 2013-05-29 19:03:28 +02:00
Ruud
ad80ea7885 Merge branch 'develop' of github.com:RuudBurger/CouchPotatoServer into develop 2013-05-29 18:33:36 +02:00
Ruud
daf31870f3 Merge branch 'refs/heads/develop' 2013-05-29 14:51:48 +02:00
Ruud
1c20cda389 Set updater crons on start. 2013-05-29 14:50:22 +02:00
sax
631759d833 Added configuration option to search over scene releases only. Fixed release name issue (removed &shy; element). 2013-05-28 22:52:22 +02:00
sax
ca02c66f26 Fixed login success detection. 2013-05-28 22:52:13 +02:00
sax
3ac095d359 Added support for Torrent Shack provider. 2013-05-28 22:52:05 +02:00
Ruud
35d49f6a5e Merge branch 'refs/heads/develop' 2013-05-28 21:15:04 +02:00
Ruud
e1bc223de0 Get year with default 2013-05-28 21:13:15 +02:00
Ruud
e065ead9b3 Api on subdomain 2013-05-26 21:50:20 +02:00
Ruud
f9471f9b9b HDBits cleanup 2013-05-26 15:16:55 +02:00
Ronald Pompa
2612b50d06 created hdbits torrent provider 2013-05-26 14:30:10 +02:00
Ruud
d9ce2906a0 Fix line ending 2013-05-26 14:24:59 +02:00
Joel Kåberg
b76397f98e addApiView explanation 2013-05-26 14:22:31 +02:00
Joel Kåberg
fcad9e0be5 fireAsync made optional 2013-05-26 14:22:23 +02:00
Ruud
2934347865 Send user-agent on login 2013-05-26 14:20:05 +02:00
Ruud
315f1b0207 Add r6 to quality list 2013-05-23 07:35:11 +02:00
Ruud
965bd79a86 Cleanup import 2013-05-19 23:20:32 +02:00
Ruud
c18563e34b uTorrent cleanup 2013-05-19 23:12:22 +02:00
Ruud
161e0de8d5 Don't need makedir in Transmission 2013-05-19 22:51:32 +02:00
Ruud
40aeca0740 PTP extra scoring 2013-05-19 22:18:01 +02:00
Ruud
63dd7fa7c0 New PTP-config for more accurate hits
Conflicts:
	couchpotato/core/providers/torrent/passthepopcorn/main.py
2013-05-19 22:10:51 +02:00
Ruud
5c0d8a7fef Merge branch 'refs/heads/develop' 2013-05-19 01:19:53 +02:00
Ruud
509b49caf1 Deepcopy and merge movie info results 2013-05-19 01:12:09 +02:00
Ruud
38c51cf79c Import cleanup 2013-05-19 00:57:50 +02:00
Ruud
0b693bba4e Add "on snatch" options to XBMC & Plex notifications
fix #1379
2013-05-19 00:30:56 +02:00
Ruud
1258f34c78 Update counter on movie add / delete
fix #1383
2013-05-19 00:22:44 +02:00
Ruud
510c0d5f56 Remove size_check in quality guess
fix #1393
2013-05-19 00:12:19 +02:00
Ruud
cdb630e580 More touch fixes 2013-05-18 23:59:37 +02:00
Ruud
65fbd38105 Make buttons more touch friendly
fix #1416
2013-05-18 23:37:49 +02:00
Ruud
1570132a55 Don't try to rss parse empty string
fix #1418
2013-05-18 22:52:02 +02:00
Ruud
7b5b748d23 Failed joining unicode and non-unicode paths
fix #1447
2013-05-18 22:45:09 +02:00
Ruud
041601c4a5 Change TPB search string
fix #1451
2013-05-18 22:12:43 +02:00
Ruud
f692fd0202 Make sure info isn't overwritten by None
fix #1724
2013-05-18 21:53:17 +02:00
Ruud
e7b4de56f2 Only run updater if enabled.
fix #1756
2013-05-18 20:30:48 +02:00
Ruud
4a616a0c04 Placeholder styling 2013-05-18 19:40:26 +02:00
Ruud
b2ab114b6d Merge branch 'refs/heads/develop' 2013-05-18 17:29:02 +02:00
Ruud
0814675d2a Remove prints in clientscript 2013-05-18 17:28:25 +02:00
Ruud
13df35462b Force expire database objects 2013-05-17 21:36:23 +02:00
Ruud
899868f51e Don't show empty message when search 2013-05-17 21:32:46 +02:00
Ruud
ee466aebce Easily reset search 2013-05-17 18:32:20 +02:00
Ruud
687ef2662e Switch filter and view 2013-05-17 17:52:19 +02:00
Ruud
5aa29acbd3 Logging fixes 2013-05-17 17:51:15 +02:00
Ruud
a8523e6d01 Merge branch 'refs/heads/develop' 2013-05-17 15:48:12 +02:00
Ruud
1c2b3d063b Empty wanted list background 2013-05-17 15:40:27 +02:00
Ruud
551a000893 Incorrect marking as BD-Rip
Fixes #1643
2013-05-17 15:12:13 +02:00
Ruud
0d82d425cc Show original message when log is failing
closes #1735
2013-05-17 12:41:31 +02:00
Ruud
0e1cea1034 Simplify minifier
fixes #1744
2013-05-17 12:30:48 +02:00
Ruud
2b75153148 Don't limit snatched & wanted
fixes #1747
2013-05-17 12:11:53 +02:00
Ruud
c170615fb3 Ignore temp updater files on cleanup 2013-05-15 14:49:45 +02:00
Ruud
f6e84b6a35 Remove view after update 2013-05-14 00:22:19 +02:00
Ruud
6144f09a1f Make lists of sorted movies files also 2013-05-14 00:15:28 +02:00
Ruud
de142e8050 Goodfilm fixes. closes #1723
Thanks @qooplmao
2013-05-14 00:13:42 +02:00
Ruud
d0c1a119fd Use list for leftover files 2013-05-13 23:48:02 +02:00
Ruud
8fd80d3185 Update instead of extend 2013-05-13 23:44:01 +02:00
Ruud
ae28c82858 Cleanup 2013-05-13 23:27:46 +02:00
Ruud
1766764c7d Skip available movies in "still not available" view. fix #1687 2013-05-13 23:07:57 +02:00
Ruud
129f8d72bd API movie.list didn't return proper total. fix #1727 2013-05-13 21:37:54 +02:00
Ruud
f946389d60 Merge branch 'refs/heads/develop' 2013-05-11 00:08:51 +02:00
Ruud
7314b5ecae Run async event in thread so the on_complete is fired properly 2013-05-11 00:04:59 +02:00
Ruud
7b0806355f Thumbnail list action position 2013-05-10 19:36:48 +02:00
Ruud
49cf72e058 Load notification after window load 2013-05-10 18:28:52 +02:00
Ruud
a11cad619d Don't unicode css 2013-05-10 18:06:10 +02:00
Ruud
c1d35e8a57 Stop blinking text when scrolling in webkit 2013-05-10 15:19:39 +02:00
Ruud
fede348fbd Icon replacements 2013-05-10 15:14:47 +02:00
Ruud
f3c60e8fa6 Added TPB proxies 2013-05-10 12:00:12 +02:00
Ruud
00e53439ed Don't wait between xbmc calls 2013-05-10 00:07:06 +02:00
Ruud
368fced0c4 Cancel autocomplete searches when starting new one 2013-05-10 00:03:42 +02:00
Ruud
666771fb0f Notification is empty styling 2013-05-10 00:02:11 +02:00
Ruud
9e3f978677 Styling fixes 2013-05-09 23:36:54 +02:00
Ruud
f467d1c4f7 Dashboard thumbnails height not set properly. fix #1698 2013-05-08 12:54:34 +02:00
Ruud
d8fc9d937e Filmweb userscript fix 2013-05-08 12:47:17 +02:00
Ruud
821f68909d One up 2013-05-05 21:19:10 +02:00
Ruud
2b8dfed475 Merge branch 'refs/heads/master' into desktop
Conflicts:
	version.py
2013-05-05 20:31:28 +02:00
Ruud
0a749ce913 Merge branch 'refs/heads/develop' 2013-05-05 20:24:40 +02:00
Ruud
e6db505cf7 Restart notification request every 2 minutes 2013-05-05 20:15:23 +02:00
Ruud
9e8d6aaaa1 Delay notification start more on mobile 2013-05-05 20:14:54 +02:00
Ruud
e814b551b4 Delay fade loader after refreshing movie 2013-05-05 17:41:16 +02:00
Ruud
080da48223 Only attach imdb url when available 2013-05-05 17:28:02 +02:00
Faryn
897330e646 Include IMDb link to movie in Pushover notifications 2013-05-05 16:54:11 +02:00
Ruud
c4c7b5b1a9 Settings styling issues 2013-05-05 14:09:11 +02:00
Ruud
b90861bc63 Show if movie is in library again on search 2013-05-05 13:35:49 +02:00
Ruud
6d1297a85f Don't show double message when refreshing movie 2013-05-05 13:31:26 +02:00
Ruud
dfd2c33657 Extend files, not append 2013-05-05 10:15:19 +02:00
Ruud
f5af551325 Extend files, not append 2013-05-05 10:14:10 +02:00
Ruud
7aad27c3d2 Last message check 0 after first message 2013-05-03 23:05:17 +02:00
Ruud
60ff3b08d4 Last message check 0 after first message 2013-05-03 23:04:52 +02:00
Ruud
7a5588d5de Merge branch 'refs/heads/develop' 2013-05-03 22:51:35 +02:00
Ruud
56b6fbbe7f Backtotop button over log pagination 2013-05-03 22:51:06 +02:00
Ruud
46c408befb Simplify thumbslist 2013-05-03 22:29:33 +02:00
Ruud
6f808fc25a Only fire hide-scrollbar function on smaller resolutions 2013-05-03 22:27:25 +02:00
Ruud
4cba44fbb1 API notifications 2013-05-03 14:07:23 +02:00
Ruud
91c45bad71 Move source url to api 2013-05-02 15:13:57 +02:00
Ruud
a30caefc04 Cleanup wizard 2013-05-02 12:38:28 +02:00
Ruud
eb20fda878 Position checkbox 2013-05-02 12:37:54 +02:00
Ruud
4a5aa02e6c Remove userscript detection 2013-05-02 11:47:32 +02:00
Ruud
25b37ad915 Criticker userscript support 2013-05-02 11:32:26 +02:00
Ruud
bbcceb982a Reverse before merging dicts 2013-05-01 23:46:35 +02:00
Ruud
c41f5eb84d Give userscript window a proper height 2013-05-01 23:32:17 +02:00
Ruud
89dc9e90b2 Logo border outside header 2013-05-01 19:02:01 +02:00
Ruud
39b1dedf12 Updater message at the bottom 2013-05-01 18:30:52 +02:00
Ruud
be28820fb2 List and dropdown styling issues 2013-05-01 17:48:59 +02:00
Ruud
0654c8cf07 More header margin on mobile 2013-04-30 22:32:13 +02:00
Ruud
2f5cb81029 Optimize initial requests 2013-04-30 22:28:15 +02:00
Ruud
067d6e8514 Put link and symlink in helpers 2013-04-30 19:32:11 +02:00
mano3m
42e19e1e2b Replace linktastic with simple ctypes
Linktastic calls the command line interpreter to do linking. This
solution calls the Windows API directly. This is faster and cleaner, but
most important of all: it doesn't cause a command window to pop up every
time a link is made. This popup grabs window focus and thus interrupts a
movie you are watching!!

Hopefully this also works on non-Windows systems. I am unable to test
this so please let me know :)
2013-04-30 19:24:28 +02:00
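Calling the Windows API directly via ctypes, as this commit describes, might look roughly like the sketch below. CreateHardLinkW is a real Win32 function; the wrapper name and the non-Windows fallback are illustrative assumptions, not the commit's actual code:

```python
import ctypes
import os
import sys

def create_hardlink(link_path, source_path):
    """Create a hard link without spawning a command window.

    On Windows this calls the Win32 CreateHardLinkW API directly via
    ctypes (no cmd.exe popup stealing focus); elsewhere it falls back
    to os.link. Illustrative sketch only, not CouchPotato's code.
    """
    if sys.platform == "win32":
        # BOOL CreateHardLinkW(LPCWSTR lpFileName, LPCWSTR lpExistingFileName,
        #                      LPSECURITY_ATTRIBUTES lpSecurityAttributes)
        ok = ctypes.windll.kernel32.CreateHardLinkW(link_path, source_path, None)
        if not ok:
            raise ctypes.WinError()
    else:
        os.link(source_path, link_path)
```

Symlinks work the same way with CreateSymbolicLinkW, though that call additionally requires a privilege or developer mode on Windows.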
Ruud
6b846b91b4 Thumblist title and action positioning 2013-04-30 19:18:59 +02:00
Ruud
5838a41813 Newznab, not enough api_keys. fix #1677 2013-04-30 18:20:33 +02:00
Ruud
3cd5513c0c Point source updater to new url 2013-04-30 18:11:44 +02:00
Ruud
bfdc8d1053 Don't remove version file 2013-04-30 18:04:58 +02:00
Ruud
30ec8216e1 Minify on backend 2013-04-30 13:17:03 +02:00
Ruud
12c3fc6ce3 Don't reverse result order 2013-04-30 11:13:47 +02:00
Ruud
7b3a1409d5 Force thumbnail view on home 2013-04-30 10:32:22 +02:00
Ruud
924bed06cb Rewrite font css 2013-04-30 00:29:25 +02:00
Ruud
8b0aa7a6b3 Initial mobile styling 2013-04-30 00:24:56 +02:00
Ruud
367c385fff Lowercase variables 2013-04-27 11:12:15 +02:00
Ruud
840efb1571 Merge branch 'develop' of git://github.com/clinton-hall/CouchPotatoServer into clinton-hall-develop 2013-04-27 11:09:30 +02:00
Ruud
9ba19d27a6 Combine status calls 2013-04-27 11:04:19 +02:00
Ruud
1d603e1ec2 Simplify event handling 2013-04-27 10:48:47 +02:00
Ruud
7818b43045 Release files add in bulk 2013-04-27 10:08:03 +02:00
Ruud
3936100000 Cache status calls 2013-04-27 09:42:29 +02:00
Clinton Hall
1a846b04ee Fix minor errors. Fix #1666 2013-04-27 11:26:52 +09:30
Ruud
384a355a53 Check release type by info 2013-04-26 23:48:54 +02:00
Ruud
58ad5c3938 Merge branch 'develop_symlink' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_symlink 2013-04-26 19:59:47 +02:00
mano3m
6ee68d1418 Fix getDownloadInfo 2013-04-26 19:37:47 +02:00
Ruud
6e45c14ac5 Don't download html files as trailers. fixes #1658 2013-04-26 18:49:39 +02:00
Ruud
e786c9c79a Use sets for ignored words. fixes #1657 2013-04-26 18:11:18 +02:00
Ruud
518ac16814 Use lowercase variable 2013-04-26 16:44:02 +02:00
Ruud
1d07eafa83 Merge branch 'dev-nzbget' of git://github.com/clinton-hall/CouchPotatoServer into clinton-hall-dev-nzbget 2013-04-26 16:42:24 +02:00
Ruud
1600b6d0ea NZBGet set advanced and description 2013-04-26 16:36:42 +02:00
Ruud
6de3a7246e Merge branch 'develop_nzbgetusername' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_nzbgetusername 2013-04-26 16:29:36 +02:00
Ruud
cbd29df52a Update to_go even if movie isn't found in manage. 2013-04-26 16:27:36 +02:00
clinton-hall
92998bafc8 write unique id to nzbget params
My mistake. Fixed now.

Yeah... sorry ;)
This does work for check_snatched... Marks as busy, or failed etc.

keep consistent release table format

fix check_snatched

correctly parse the NZBGet Parameters and Pass status.downloader

remove downloader and fix id
2013-04-26 16:03:20 +09:30
mano3m
1022753213 Add username to nzbget downloader
For Raspberry Pi a different username than the default is required. Fixes
#1652
2013-04-24 22:19:34 +02:00
mano3m
b85942989d Standardise failed status
Apply the failed status in case of a manual re-add and automatic try
next release
2013-04-21 23:46:21 +02:00
Clinton Hall
f2f43a2231 Won't delete the "from" (sub)folders, "movie" folder or "destination" folder at this time. These should be deleted after the renaming... at this stage we should only be deleting any "older releases" 2013-04-20 11:42:53 +09:30
mano3m
2979a8edec Cleanup 2013-04-19 21:28:53 +02:00
Ruud
0e90739786 Update to Tornado 3.0 2013-04-19 14:49:27 +02:00
mano3m
185a530b59 only link for torrents not nzbs 2013-04-17 21:19:50 +02:00
Ruud
4f6b31d14a Add login check to torrentleech. closes #1635 2013-04-16 21:15:38 +02:00
Ruud
f1dde5c925 Merge branch 'refs/heads/develop' 2013-04-14 11:09:32 +02:00
Ruud
64afa3701a Add headers for IPTorrents. fix #1558
Thanks @got3nks
2013-04-14 11:08:50 +02:00
Ruud
cb0b6614c6 move_symlink proper function 2013-04-12 21:14:45 +02:00
Kyle Klein
177063d39c Add File Action Move & Sym link 2013-04-12 21:10:30 +02:00
Ruud
be595aba91 Loading in movie lists 2013-04-12 20:57:00 +02:00
Ruud
66d9d853af NZB ids not persistent in new sessions 2013-04-08 21:44:36 +02:00
Ruud
95a68af795 Debug logs 2013-04-08 21:40:17 +02:00
Ruud
c1937ea71f linktastic cleanup 2013-04-08 21:10:46 +02:00
Ruud
a7bd8c822a Simplify nonblocking requests 2013-04-08 20:55:42 +02:00
Ruud
0eff4f0096 Merge branch 'master' of github.com:RuudBurger/CouchPotatoServer 2013-04-05 23:59:56 +02:00
Ruud
4d7fa08805 Merge branch 'refs/heads/develop' 2013-04-05 23:57:54 +02:00
Ruud
5fd4312ff8 Use simpler file.cache 2013-04-05 23:50:28 +02:00
Ruud
a600430be4 file_action cleanup
tag ignored/failed/renamed with custom file
2013-04-05 21:56:33 +02:00
Ruud
f77b598899 Merge branch 'links' of git://github.com/jkaberg/CouchPotatoServer into jkaberg-links 2013-04-05 18:00:17 +02:00
Ruud
ac045539d1 Don't move or delete anything in status check 2013-04-05 17:51:45 +02:00
Ruud
5b0fa9054b Put renaming started lower 2013-04-05 17:51:19 +02:00
Joel Kåberg
3c2a00b17b add copy and move to file actions 2013-04-05 14:21:15 +02:00
Joel Kåberg
47ddf31f76 ruud was bullying me ;) 2013-04-05 14:07:10 +02:00
Joel Kåberg
57ae06e139 initial link support 2013-04-05 13:48:09 +02:00
Ruud
7f4373e000 Traceback import missing 2013-04-05 12:08:45 +02:00
mano3m
63609bb52c Fix Transmission 2013-04-05 00:09:23 +02:00
Ruud
f0af184262 Merge branch 'refs/heads/develop' 2013-04-02 11:32:20 +02:00
Ruud
72cc3576d3 Use @mano3m code to check download_info 2013-04-01 22:51:34 +02:00
Ruud
3fe7d2ea15 Download id cleanup 2013-04-01 21:18:29 +02:00
Ruud
8eed54f1f7 Merge branch 'develop_dwnlodid_complete' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_dwnlodid_complete
Conflicts:
	couchpotato/core/downloaders/transmission/__init__.py
	couchpotato/core/downloaders/transmission/main.py
2013-04-01 20:46:36 +02:00
Ruud
c7ee8a0635 Don't use object in correctmovie event 2013-04-01 20:11:49 +02:00
Ruud
33a6a7d3a0 CP Provider API Identifier 2013-04-01 17:31:04 +02:00
mano3m
2851781a72 Move around items between scanner and renamer 2013-04-01 13:03:05 +02:00
mano3m
45b9919f67 Added some debugging info 2013-04-01 10:16:30 +02:00
Ruud
207e846ae6 Check crons after saving settings. fix #1556 & #1557 2013-04-01 00:06:15 +02:00
Ruud
a83c276aa2 Schedule start normal 2013-03-31 23:50:47 +02:00
Ruud
4cdb99a383 Add dvdscreen to screener quality. fix #1555 2013-03-31 22:42:27 +02:00
mano3m
8fe60a893c Update of uTorrent 2013-03-31 18:53:19 +02:00
Ruud
0c44c48628 Notification test failed. closes #1561
Thanks @FredrikWendt
2013-03-31 11:12:37 +02:00
mano3m
6a18e546ca Add and make use of renamer.scanfolder in downloaders
This is the next step in closing the loop between the downloaders and CPS. The download_id and folder from the downloader are used to find the downloaded files and start the renamer. This is done by adding an additional API call: renamer.scanfolder.

I tested this for SabNZBd only (!) and everything works as expected.

I also added transmission with thanks @manusfreedom for setting this up in f1cf0d91da. @manusfreedom, please check if this works as expected. Note that transmission now has a feature which is not in the other torrent providers: it waits until the seed ratio is met and then removes the torrent. I opened a topic in the forum to discuss how we want to deal with torrents: https://couchpota.to/forum/thread-1704.html
2013-03-31 10:49:40 +02:00
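A downloader's post-process hook could trigger such a scan with a plain HTTP GET against the API call this commit names (renamer.scanfolder). The host, port, API key, and exact parameter names below are assumptions for illustration only:

```python
# Hypothetical example of a downloader post-process script calling the
# renamer scan API described above. Host, port, API key and parameter
# names are illustrative assumptions, not the confirmed interface.
try:
    from urllib.parse import urlencode   # Python 3
except ImportError:
    from urllib import urlencode         # Python 2, CouchPotato's era

def build_scan_url(base_url, api_key, download_id, folder):
    """Build the URL a downloader would hit after finishing a download."""
    query = urlencode({"download_id": download_id, "media_folder": folder})
    return "%s/api/%s/renamer.scanfolder/?%s" % (base_url, api_key, query)

url = build_scan_url("http://localhost:5050", "abc123",
                     "SABnzbd_nzo_42", "/downloads/Movie.2013.720p")
# A real script would then fetch this URL, e.g. with urlopen(url).
```

Passing both the download_id and the folder lets the renamer match the finished files back to the snatched release instead of blindly rescanning everything.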
Ruud
4cedccb178 Move over html template 2013-03-29 12:39:15 +01:00
Ruud
eab9a735a9 Show real transmission error. 2013-03-29 12:39:06 +01:00
Ruud
1df05cf344 Don't use directory when it's empty. fix #1448 2013-03-28 21:51:42 +01:00
Ruud
843ff0eabc Add some default Newznab providers 2013-03-26 22:02:43 +01:00
Ruud
5a23be2224 Merge branch 'refs/heads/develop' 2013-03-26 21:42:25 +01:00
Ruud
45c8817c62 Always show release helper. fix #1549 2013-03-26 21:42:07 +01:00
Ruud
7f87b255f9 Merge branch 'refs/heads/develop' 2013-03-26 21:10:27 +01:00
Ruud
665c84c6de Ignore done_status on re-add. fix #1547 2013-03-25 23:00:12 +01:00
Ruud
b91a077c91 Easier "get next best release" buttons 2013-03-25 22:54:22 +01:00
Ruud
59b924efe7 Tweak button styling 2013-03-25 22:53:48 +01:00
Ruud
730718a396 Link to country codes for subtitles 2013-03-25 12:24:54 +01:00
Ruud
3c0edc0d6a Clean out more words before search.
Thanks to @jkaberg
2013-03-25 12:09:32 +01:00
Ruud
7c234ab7e9 Simplify notification providers 2013-03-25 11:01:31 +01:00
Ruud
b82319cb54 Use existing Trakt auth settings and other cleanup 2013-03-25 10:48:59 +01:00
Ruud
6685495400 Merge branch 'trakt_notify' of git://github.com/EchelonFour/CouchPotatoServer into EchelonFour-trakt_notify 2013-03-25 09:59:51 +01:00
Ruud
b216589e88 Use normal ID, not extended. fix #1543 2013-03-25 09:58:39 +01:00
Ruud
744aa153f6 Merge branch 'develop_720p1080p' of git://github.com/mano3m/CouchPotatoServer into mano3m-develop_720p1080p 2013-03-25 09:11:23 +01:00
Ruud
67612fce98 Use label on combined setting 2013-03-24 20:27:38 +01:00
Ruud
72ba1a173c Make letterboxd multi-watchlist 2013-03-24 20:18:49 +01:00
Ruud
989e217775 Merge branch 'letterboxd-importer' of git://github.com/himynameisjonas/CouchPotatoServer into himynameisjonas-letterboxd-importer 2013-03-24 19:57:15 +01:00
Ruud
b0d556c8eb Try and find release status by ID instead of name. closes #1511
Thanks to @mano3m
2013-03-24 19:54:11 +01:00
Jonas Forsberg
1a54d8fad9 Letterboxd wishlist importer 2013-03-23 11:49:40 +01:00
mano3m
f9ace29cab Fix the names of quality identifiers
720P should be 720p, 1080P should be 1080p. Ref
http://en.wikipedia.org/wiki/720p and http://en.wikipedia.org/wiki/1080p

Note that this update only changes anything for new databases. As far
as I can see the change for existing databases is minimal.
2013-03-23 00:40:38 +01:00
Ruud
a97570027d Prevent null in boolean column. fix #1374 2013-03-22 22:31:49 +01:00
Ruud
de36faa0a7 DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS DEVELOPERS 2013-03-22 21:59:28 +01:00
Ruud
19641bd897 Give the scanner some rest when too many threads 2013-03-20 22:47:14 +01:00
Ruud
2c64641a1b Prepend lists when merging event objects 2013-03-20 22:45:31 +01:00
Ruud
5ac1118db3 Merge branch 'refs/heads/develop' 2013-03-20 20:32:57 +01:00
Ruud
717b88b5fe Force pushalot image refresh 2013-03-20 20:30:34 +01:00
Ruud
158a7fc311 Optimize PNGs 2013-03-20 19:47:48 +01:00
Ruud
2c46279617 Merge branch 'refs/heads/develop' 2013-03-20 19:37:15 +01:00
Ruud
b843d5f13b General notification icons 2013-03-20 19:35:11 +01:00
Ruud
4aff3f0495 Add score per provider. closes #1512 2013-03-20 08:50:43 +01:00
Ruud
4406f133b9 CAPITALIZE MOTHAFAAACKAAH! 2013-03-19 23:28:51 +01:00
Ruud
572dfd529e Shorten automation description 2013-03-19 23:26:55 +01:00
mano3m
2cb6ddfe9a Add automation genre checking
With this commit you can set genre requirements for movie automation
downloads: required sets, e.g. Action&Crime, and/or ignored sets, e.g.
Romance&Comedy.
2013-03-19 23:16:25 +01:00
Ruud
250236bd25 Option to force search 2013-03-19 23:12:47 +01:00
Prinz23
7f24563bba Add Advanced Option to deactivate "Too early to search for ..."
Advanced Option: Check Released
2013-03-19 22:58:18 +01:00
Ruud
5d6a9ad2d0 Merge branch 'refs/heads/develop' 2013-03-19 22:55:39 +01:00
Ruud
0115bf254e Force default profile on movies without profile. fix #1523 2013-03-19 22:55:10 +01:00
Ruud
607b5ea766 Run exe after install 2013-03-19 21:22:07 +01:00
Ruud
88579cd71a One up 2013-03-19 20:52:07 +01:00
Ruud
6c57316ce6 Use https for changelog 2013-03-19 20:46:00 +01:00
Ruud
6702683da3 Merge branch 'refs/heads/develop' into desktop 2013-03-19 20:34:38 +01:00
Ruud
b9c2b42725 Merge branch 'refs/heads/develop' 2013-03-19 20:28:46 +01:00
Ruud
e54928720a Don't download same quality twice. fix #1519 2013-03-19 20:24:48 +01:00
Ruud
f8f22cdef7 Description typo 2013-03-19 00:25:03 +01:00
Ruud
1ed58586a1 Force install in AppData
Add images to installer
2013-03-18 23:56:54 +01:00
Ruud
e694276a8d Save view to different cookie so people don't have to reset. 2013-03-18 22:02:47 +01:00
Ruud
a8369b4e93 Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2013-03-18 21:57:58 +01:00
Ruud
73b7bcc6ce Force dashboard view 2013-03-18 21:56:50 +01:00
Ruud
f08ccd4fd8 One up installer 2013-03-17 22:34:04 +01:00
Ruud
312562a9f5 Merge branch 'refs/heads/develop' into desktop
Conflicts:
	version.py
2013-03-17 16:42:53 +01:00
Ruud
fab8e66fe1 One up
Conflicts:
	version.py
2013-03-17 16:40:22 +01:00
Ruud
1cd8040692 One up 2013-03-17 16:39:09 +01:00
Ruud
4db1b57c70 Merge branch 'refs/heads/develop' 2013-03-17 16:31:31 +01:00
Ruud
7268e02386 zindex fixes & empty home element 2013-03-17 15:50:45 +01:00
Ruud
805aa3ca9f Split query to fix title bug. fix #1510 2013-03-17 15:14:04 +01:00
Ruud
29cb34551c Hide title and description by default 2013-03-17 14:42:24 +01:00
Ruud
d267be4455 Only sleep on 404 when not in dev mode 2013-03-17 14:10:29 +01:00
Ruud
92f4ade371 Save the last view properly 2013-03-17 12:55:07 +01:00
Ruud
9235eda73b Reverse merging using priority 2013-03-17 12:42:14 +01:00
Ruud
1fe23afd1b Don't mark first title default 2013-03-17 11:40:22 +01:00
Ruud
09637c3069 Revert "Search priority"
This reverts commit 2cafd509fc.
2013-03-17 11:39:54 +01:00
Ruud
2cafd509fc Search priority 2013-03-17 02:01:48 +01:00
Ruud
62cc570ab2 Mask zindex fix 2013-03-17 01:52:27 +01:00
Ruud
1ec9370e68 Make sure to set default title on refresh. fix #1436 2013-03-17 01:40:36 +01:00
Ruud
5b4c60ecba Optimize dashboard.soon with joins 2013-03-17 01:14:15 +01:00
Ruud
7b7488ece8 Dashboard split
Do more with snatched and other statusses
2013-03-16 22:23:11 +01:00
Ruud
4ba7ff9f27 Search mask fix 2013-03-16 22:21:33 +01:00
Ruud
df2d1aca4b Allow email notification to send to multiple addresses 2013-03-16 15:27:38 +01:00
Ruud
4fcba70c9a Cleanup dashboard snatched movies 2013-03-16 11:53:55 +01:00
Ruud
d0fc20ca6e Add last_edit to movie and release tables 2013-03-16 11:51:46 +01:00
Ruud
9402b54f9b Force to wanted after wizard 2013-03-15 16:38:26 +01:00
Ruud
f0e7795b9b Ubuntu init script /etc/default 2013-03-15 14:33:34 +01:00
dfiore1230
bba18d8bc9 added the ability to source /etc/default/couchpotato file
added the ability to source /etc/default/couchpotato file by testing for file existence and source when available

added lines 39 - 45
2013-03-15 13:58:33 +01:00
Ruud
0494e5fc8f Cleanup pushalot notifier 2013-03-13 22:08:51 +01:00
Travis La Marr
df1b46272d Pushalot notifier for Windows Phone 7/8 and Windows 8 2013-03-13 21:34:18 +01:00
Ruud
b06dbd3069 Merge branch 'refs/heads/develop' 2013-03-12 21:12:18 +01:00
Ruud Burger
ed068f09b0 Only chown PID file 2013-03-12 10:40:20 +01:00
Ruud Burger
5e852d05ee Only remove PID file 2013-03-12 08:29:29 +01:00
Ruud Burger
d111393bd6 Remove PID path 2013-03-12 08:21:23 +01:00
Ruud
f84aa8c638 Merge branch 'refs/heads/develop' 2013-03-09 18:15:26 +01:00
Ruud
89bff73431 Decode torrent hash for magnets also 2013-03-09 18:15:06 +01:00
Ruud
8e07dfc730 Merge branch 'refs/heads/develop' 2013-03-08 14:46:01 +01:00
Ruud
cd16dddf13 Make sure to use the correct hash for utorrent 2013-03-08 14:45:32 +01:00
Ruud
25605c45b9 IPTorrent download url fix
Thanks seedboy
2013-03-08 14:28:46 +01:00
Ruud
b6d0d54609 Add params to cache_key 2013-03-04 23:11:40 +01:00
Ruud
98981dac27 Suggestions 2013-03-04 23:11:26 +01:00
Ruud
ddf03cbcf2 Diskspace event 2013-03-04 23:11:20 +01:00
Ruud
1e1abf407c Dashboard 2013-03-04 23:11:13 +01:00
Ruud
1267cdac4d Remove print from TPB provider 2013-02-24 00:18:10 +01:00
Ruud
05bcee12ae No need for folder for pid file 2013-02-24 00:17:57 +01:00
Ruud
fc3f15e0cf Remove dots and spaces from left movie name. fixes #1428 2013-02-23 17:45:27 +01:00
Ruud
0a7765f639 uTorrent status support. closes #1391
Thanks to Stourwalk
2013-02-23 16:36:12 +01:00
Ruud
c214458770 IPTorrents, don't continue if nothing found. fixes #1423 2013-02-23 16:09:53 +01:00
Ruud
bfe501c84a Better XBMC notification image. close #1427 2013-02-23 16:01:20 +01:00
Ruud
e034465df8 Show newznab name in release list. fix #1400 2013-02-23 15:58:36 +01:00
Ruud
a7b78d4131 Tornado update 2013-02-22 23:20:16 +01:00
Ruud
3eed34c710 Gzip Tornado response 2013-02-22 22:56:08 +01:00
Ruud
9cb3bef156 Fallback to non-minified scripts 2013-02-22 21:23:38 +01:00
Ruud
46c7e3fbed IPTorrent support. closes #1411
Thanks to @seedboy
2013-02-15 21:36:59 +01:00
Ruud
a49a00a25f Host to 0.0.0.0 2013-02-14 23:02:44 +01:00
Ruud
eed0382b41 Host to 0.0.0.0 2013-02-14 23:01:34 +01:00
Ruud
673843fb66 Merge branch 'refs/heads/develop' 2013-02-12 23:25:11 +01:00
Ruud
4e45c94fc3 Renamer NTFS permission fix #778 2013-02-12 23:23:18 +01:00
Ruud
0a11dc6673 Set file permissions on .nzb or torrent file. closes #1362
Thanks clinton
2013-02-12 23:12:40 +01:00
Ruud
4ede2c20a1 Goodfilm automation provider. closes #1366 2013-02-12 23:10:34 +01:00
Ruud
af0cf523e3 Fedora init script. closes #1399 2013-02-12 22:56:02 +01:00
Ruud
3908e00650 Stop progress search on fail. fix #1409 2013-02-12 22:49:44 +01:00
Ruud
f9bdf6da1c Send correct headers to SABNZBd. fix #1406 2013-02-12 22:42:26 +01:00
Ruud
811f35b028 Merge branch 'refs/heads/develop' 2013-02-04 23:11:39 +01:00
Ruud
87cdf9222d Hide test notification button 2013-02-04 23:05:21 +01:00
Ruud
2ca2cc9597 Don't fire openpage twice on start 2013-02-04 22:36:22 +01:00
Ruud
edb232df60 Don't fire progress untill other request ended 2013-02-04 22:35:25 +01:00
Ruud
af113c0ffd Minifier 2 2013-02-04 21:59:12 +01:00
Ruud
856b495995 Minifier 2013-02-04 21:48:02 +01:00
Ruud
a56bbf0b3b CP API cleanup 2013-02-03 21:50:29 +01:00
Ruud
4b54113f08 Use CP api for movie check 2013-02-03 18:20:11 +01:00
Ruud
52371b7705 Daemonize cleanup 2013-02-02 23:16:02 +01:00
Ruud
629bead919 Raise current exception 2013-02-02 12:02:54 +01:00
Ruud
c7cd72787f Ignore extracted folder. fix #1369 2013-02-02 11:49:12 +01:00
Ruud
ec6e2c240f Merge branch 'refs/heads/develop' 2013-01-28 23:21:52 +01:00
Ruud
9e260a89af One up 2013-01-26 14:51:39 +01:00
Ruud
3187a0f820 Merge branch 'refs/heads/develop' 2013-01-25 15:52:54 +01:00
Ruud
f86b9299c4 Merge branch 'refs/heads/develop' 2013-01-25 14:21:11 +01:00
Ruud
d27d0abeb0 Merge branch 'refs/heads/develop'
Conflicts:
	version.py
2013-01-24 23:35:37 +01:00
Ruud
7c59348138 Merge branch 'refs/heads/develop' 2013-01-23 22:54:29 +01:00
Ruud
ab53f44157 Remove non-int backup folders. closes #1298 2013-01-23 22:23:52 +01:00
Ruud
b35f325d94 Merge branch 'refs/heads/develop' 2013-01-23 22:16:26 +01:00
Ruud
393c14de54 Urlencode spotweb id. fix #1213 2013-01-07 23:12:08 +01:00
Ruud
bff17c0b95 Merge branch 'refs/heads/develop' 2013-01-07 22:40:37 +01:00
Ruud
d172828ac5 Merge branch 'refs/heads/develop' 2013-01-02 14:12:07 +01:00
Ruud
9500ac73fc Link to downloaders 2013-01-02 13:52:44 +01:00
Ruud
e2cf7e4421 Merge branch 'refs/heads/develop' 2013-01-02 13:44:34 +01:00
Frank Fenton
c087a6b49b Add Trakt notification 2012-09-18 02:44:38 +10:00
1214 changed files with 96421 additions and 173408 deletions

View File

@@ -1,4 +1,5 @@
#!/usr/bin/env python
from __future__ import print_function
from logging import handlers
from os.path import dirname
import logging
@@ -18,7 +19,12 @@ base_path = dirname(os.path.abspath(__file__))
sys.path.insert(0, os.path.join(base_path, 'libs'))
from couchpotato.environment import Env
from couchpotato.core.helpers.variable import getDataDir
from couchpotato.core.helpers.variable import getDataDir, removePyc
# Remove pyc files before dynamic load (sees .pyc files regular .py modules)
removePyc(base_path)
class Loader(object):
@@ -28,7 +34,7 @@ class Loader(object):
# Get options via arg
from couchpotato.runner import getOptions
self.options = getOptions(base_path, sys.argv[1:])
self.options = getOptions(sys.argv[1:])
# Load settings
settings = Env.get('settings')
@@ -49,7 +55,7 @@ class Loader(object):
# Create logging dir
self.log_dir = os.path.join(self.data_dir, 'logs');
if not os.path.isdir(self.log_dir):
os.mkdir(self.log_dir)
os.makedirs(self.log_dir)
# Logging
from couchpotato.core.logger import CPLog
@@ -66,10 +72,11 @@ class Loader(object):
signal.signal(signal.SIGTERM, lambda signum, stack_frame: sys.exit(1))
from couchpotato.core.event import addEvent
addEvent('app.after_shutdown', self.afterShutdown)
addEvent('app.do_shutdown', self.setRestart)
def afterShutdown(self, restart):
def setRestart(self, restart):
self.do_restart = restart
return True
def onExit(self, signal, frame):
from couchpotato.core.event import fireEvent
@@ -97,7 +104,6 @@ class Loader(object):
# Release log files and shutdown logger
logging.shutdown()
time.sleep(3)
args = [sys.executable] + [os.path.join(base_path, os.path.basename(__file__))] + sys.argv[1:]
subprocess.Popen(args)
@@ -132,14 +138,15 @@ if __name__ == '__main__':
pass
except SystemExit:
raise
except socket.error as (nr, msg):
except socket.error as e:
# log when socket receives SIGINT, but continue.
# previous code would have skipped over other types of IO errors too.
nr, msg = e
if nr != 4:
try:
l.log.critical(traceback.format_exc())
except:
print traceback.format_exc()
print(traceback.format_exc())
raise
except:
try:
@@ -148,7 +155,7 @@ if __name__ == '__main__':
if l:
l.log.critical(traceback.format_exc())
else:
print traceback.format_exc()
print(traceback.format_exc())
except:
print traceback.format_exc()
print(traceback.format_exc())
raise
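The hunk above converts Python 2-only exception unpacking: `except socket.error as (nr, msg)` is a syntax error on Python 3, so the errno has to be pulled out of the exception's args inside the handler instead. A minimal stdlib sketch of the same pattern (the error values and the `handle` helper are illustrative, not CouchPotato's actual main loop):

```python
import errno
import socket

def handle(err):
    # Python 2 allowed `except socket.error as (nr, msg)`; Python 3 does not,
    # so unpack the errno from the exception's args inside the handler.
    nr = err.args[0] if err.args else None
    if nr == errno.EINTR:  # 4: interrupted system call (e.g. SIGINT) -> keep running
        return 'continue'
    return 'fatal'

print(handle(socket.error(errno.EINTR, 'Interrupted system call')))  # 'continue'
print(handle(socket.error(errno.ECONNRESET, 'Connection reset')))    # 'fatal'
```

On Python 3 `socket.error` is just an alias of `OSError`, so the `args` tuple carries `(errno, message)` either way, which is what makes this rewrite both 2- and 3-compatible.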

View File

@@ -81,7 +81,7 @@ class TaskBarIcon(wx.TaskBarIcon):
webbrowser.open(url)
def onSettings(self, event):
url = self.frame.parent.getSetting('base_url') + '/settings/'
url = self.frame.parent.getSetting('base_url') + 'settings/about/'
webbrowser.open(url)
def onTaskBarClose(self, evt):
@@ -127,7 +127,7 @@ class WorkerThread(Thread):
# Get options via arg
from couchpotato.runner import getOptions
args = ['--quiet']
self.options = getOptions(base_path, args)
self.options = getOptions(args)
# Load settings
settings = Env.get('settings')
@@ -154,6 +154,7 @@ class WorkerThread(Thread):
pass
self._desktop.frame.Close()
self._desktop.ExitMainLoop()
class CouchPotatoApp(wx.App, SoftwareUpdate):
@@ -162,12 +163,13 @@ class CouchPotatoApp(wx.App, SoftwareUpdate):
events = {}
restart = False
closing = False
triggered_onClose = False
def OnInit(self):
# Updater
base_url = 'http://couchpota.to/updates/%s/' % VERSION
self.InitUpdates(base_url, base_url + 'changelog.html',
base_url = 'https://api.couchpota.to/updates/%s'
self.InitUpdates(base_url % VERSION + '/', 'https://couchpota.to/updates/%s' % 'changelog.html',
icon = wx.Icon('icon.png'))
self.frame = MainFrame(self)
@@ -197,7 +199,9 @@ class CouchPotatoApp(wx.App, SoftwareUpdate):
self.closing = True
self.frame.tbicon.onTaskBarClose(event)
onClose = self.events.get('onClose')
onClose = self.events.get('onClose')
if onClose and not self.triggered_onClose:
self.triggered_onClose = True
onClose(event)
def afterShutdown(self, restart = False):

View File

@@ -1,4 +1,4 @@
CouchPotato Server
CouchPotato
=====
CouchPotato (CP) is an automatic NZB and torrent downloader. You can keep a "movies I want"-list and it will search for NZBs/torrents of these movies every X hours.
@@ -7,7 +7,7 @@ Once a movie is found, it will send it to SABnzbd or download the torrent to a s
## Running from Source
CouchPotatoServer can be run from source. This will use *git* as updater, so make sure that is installed also.
CouchPotatoServer can be run from source. This will use *git* as updater, so make sure that is installed.
Windows, see [the CP forum](http://couchpota.to/forum/showthread.php?tid=14) for more details:
@@ -17,9 +17,9 @@ Windows, see [the CP forum](http://couchpota.to/forum/showthread.php?tid=14) for
* Open up `Git Bash` (or CMD) and go to the folder you want to install CP. Something like Program Files.
* Run `git clone https://github.com/RuudBurger/CouchPotatoServer.git`.
* You can now start CP via `CouchPotatoServer\CouchPotato.py` to start
* Your browser should open up, but if it doesn't go to: `http://localhost:5050/`
* Your browser should open up, but if it doesn't go to `http://localhost:5050/`
OSx:
OS X:
* If you're on Leopard (10.5) install Python 2.6+: [Python 2.6.5](http://www.python.org/download/releases/2.6.5/)
* Install [GIT](http://git-scm.com/)
@@ -27,16 +27,37 @@ OSx:
* Go to your App folder `cd /Applications`
* Run `git clone https://github.com/RuudBurger/CouchPotatoServer.git`
* Then do `python CouchPotatoServer/CouchPotato.py`
* Your browser should open up, but if it doesn't go to: `http://localhost:5050/`
* Your browser should open up, but if it doesn't go to `http://localhost:5050/`
Linux (ubuntu / debian):
Linux (Ubuntu / Debian):
* Install [GIT](http://git-scm.com/) with `apt-get install git-core`
* 'cd' to the folder of your choosing.
* Run `git clone https://github.com/RuudBurger/CouchPotatoServer.git`
* Then do `python CouchPotatoServer/CouchPotato.py` to start
* To run on boot copy the init script. `sudo cp CouchPotatoServer/init/ubuntu /etc/init.d/couchpotato`
* Change the paths inside the init script. `sudo nano /etc/init.d/couchpotato`
* Make it executable. `sudo chmod +x /etc/init.d/couchpotato`
* Add it to defaults. `sudo update-rc.d couchpotato defaults`
* Open your browser and go to: `http://localhost:5050/`
* To run on boot copy the init script `sudo cp CouchPotatoServer/init/ubuntu /etc/init.d/couchpotato`
* Copy the default paths file `sudo cp CouchPotatoServer/init/ubuntu.default /etc/default/couchpotato`
* Change the paths inside the default file `sudo nano /etc/default/couchpotato`
* Make it executable `sudo chmod +x /etc/init.d/couchpotato`
* Add it to defaults `sudo update-rc.d couchpotato defaults`
* Open your browser and go to `http://localhost:5050/`
FreeBSD :
* Update your ports tree `sudo portsnap fetch update`
* Install Python 2.6+ [lang/python](http://www.freshports.org/lang/python) with `cd /usr/ports/lang/python; sudo make install clean`
* Install port [databases/py-sqlite3](http://www.freshports.org/databases/py-sqlite3) with `cd /usr/ports/databases/py-sqlite3; sudo make install clean`
* Add a symlink to 'python2' `sudo ln -s /usr/local/bin/python /usr/local/bin/python2`
* Install port [ftp/libcurl](http://www.freshports.org/ftp/libcurl) with `cd /usr/ports/ftp/libcurl; sudo make install clean`
* Install port [ftp/curl](http://www.freshports.org/ftp/curl), deselect 'Asynchronous DNS resolution via c-ares' when prompted during config: `cd /usr/ports/ftp/curl; sudo make install clean`
* Install port [textproc/docbook-xml-450](http://www.freshports.org/textproc/docbook-xml-450) with `cd /usr/ports/textproc/docbook-xml-450; sudo make install clean`
* Install port [GIT](http://git-scm.com/) with `cd /usr/ports/devel/git; sudo make install clean`
* 'cd' to the folder of your choosing.
* Run `git clone https://github.com/RuudBurger/CouchPotatoServer.git`
* Then run `sudo python CouchPotatoServer/CouchPotato.py` to start for the first time
* To run on boot copy the init script. `sudo cp CouchPotatoServer/init/freebsd /etc/rc.d/couchpotato`
* Change the paths inside the init script. `sudo vim /etc/rc.d/couchpotato`
* Make init script executable. `sudo chmod +x /etc/rc.d/couchpotato`
* Add init to startup. `sudo echo 'couchpotato_enable="YES"' >> /etc/rc.conf`
* Open your browser and go to: `http://server:5050/`

View File

@@ -1,15 +1,36 @@
#So you feel like posting a bug, sending me a pull request or just telling me how awesome I am. No problem!
# Contributing to CouchPotatoServer
##Just make sure you think of the following things:
1. [Contributing](#contributing)
2. [Submitting an Issue](#issues)
3. [Submitting a Pull Request](#pull-requests)
* Search through the existing (and closed) issues first. See if you can get your answer there.
* Double check the result manually, because it could be an external issue.
* Post logs! Without seeing what is going on, I can't reproduce the error.
* What is the movie + quality you are searching for.
* What are you settings for the specific problem.
* What providers are you using. (While your logs include these, scanning through hundred of lines of log isn't my hobby).
* Give me a short step by step of how to reproduce.
* What hardware / OS are you using and what are the limits? NAS can be slow and maybe have a different python installed then when you use CP on OSX or Windows for example.
* I will mark issues with the "can't reproduce" tag. Don't go asking me "why closed" if it clearly says the issue in the tag ;)
## Contributing
Thank you for your interest in contributing to CouchPotato. There are several ways to help out, even if you've never worked on an open source project before.
If you've found a bug or want to request a feature, you can report it by [posting an issue](https://github.com/RuudBurger/CouchPotatoServer/issues/new) - be sure to read the [guidelines](#issues) first!
If you want to contribute your own work, please read the [guidelines](#pull-requests) for submitting a pull request.
Lastly, for anything related to CouchPotato, feel free to stop by the [forum](http://couchpota.to/forum/) or the [#couchpotato](http://webchat.freenode.net/?channels=couchpotato) IRC channel at irc.freenode.net.
**If I don't get enough info, the change of the issue getting closed is a lot bigger ;)**
## Issues
Issues are intended for reporting bugs and weird behaviour or suggesting improvements to CouchPotatoServer.
Before you submit an issue, please go through the following checklist:
* Search through existing issues (*including closed issues!*) first: you might be able to get your answer there.
* Double check your issue manually, because it could be an external issue.
* Post logs with your issue: Without seeing what is going on, the developers can't reproduce the error.
* Check the logs yourself before submitting them. Obvious errors like permission or HTTP errors are often not related to CouchPotato.
* What movie and quality are you searching for?
* What are your settings for the specific problem?
* What providers are you using? (While your logs include these, scanning through hundreds of lines of logs isn't our hobby)
* Post the logs from the *config* directory, please do not copy paste the UI. Use pastebin to store these logs!
* Give a short step by step of how to reproduce the error.
* What hardware / OS are you using and what are its limitations? For example: NAS can be slow and maybe have a different version of python installed than when you use CP on OS X or Windows.
* Your issue might be marked with the "can't reproduce" tag. Don't ask why your issue was closed if it says so in the tag.
* If you're running on a NAS (QNAP, Austor, Synology etc.) with pre-made packages, make sure these are set up to use our source repository (RuudBurger/CouchPotatoServer) and nothing else!
The more relevant information you provide, the more likely that your issue will be resolved.
## Pull Requests
Pull requests are intended for contributing code or documentation to the project. Before you submit a pull request, consider the following:
* Make sure your pull request is made for the *develop* branch (or relevant feature branch).
* Have you tested your PR? If not, why?
* Does your PR have any limitations we should know of?
* Is your PR up-to-date with the branch you're trying to push into?

View File

@@ -1,83 +1,150 @@
from couchpotato.api import api_docs, api_docs_missing
from couchpotato.core.auth import requires_auth
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.request import getParams, jsonified
from couchpotato.core.helpers.variable import md5
from couchpotato.core.logger import CPLog
from couchpotato.environment import Env
from flask.app import Flask
from flask.blueprints import Blueprint
from flask.globals import request
from flask.helpers import url_for
from flask.templating import render_template
from sqlalchemy.engine import create_engine
from sqlalchemy.orm import scoped_session
from sqlalchemy.orm.session import sessionmaker
from werkzeug.utils import redirect
import os
import time
import traceback
from couchpotato.api import api_docs, api_docs_missing, api
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.variable import md5, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.environment import Env
from tornado import template
from tornado.web import RequestHandler, authenticated
log = CPLog(__name__)
app = Flask(__name__, static_folder = 'nope')
web = Blueprint('web', __name__)
views = {}
template_loader = template.Loader(os.path.join(os.path.dirname(__file__), 'templates'))
def get_session(engine = None):
return Env.getSession(engine)
class BaseHandler(RequestHandler):
def addView(route, func, static = False):
web.add_url_rule(route + ('' if static else '/'), endpoint = route if route else 'index', view_func = func)
def get_current_user(self):
username = Env.setting('username')
password = Env.setting('password')
""" Web view """
@web.route('/')
@requires_auth
if username and password:
return self.get_secure_cookie('user')
else: # Login when no username or password are set
return True
# Main web handler
class WebHandler(BaseHandler):
@authenticated
def get(self, route, *args, **kwargs):
route = route.strip('/')
if not views.get(route):
page_not_found(self)
return
try:
self.write(views[route]())
except:
log.error("Failed doing web request '%s': %s", (route, traceback.format_exc()))
self.write({'success': False, 'error': 'Failed returning results'})
def addView(route, func):
views[route] = func
def get_db():
return Env.get('db')
# Web view
def index():
return render_template('index.html', sep = os.sep, fireEvent = fireEvent, env = Env)
return template_loader.load('index.html').generate(sep = os.sep, fireEvent = fireEvent, Env = Env)
addView('', index)
""" Api view """
@web.route('docs/')
@requires_auth
# API docs
def apiDocs():
from couchpotato import app
routes = []
for route, x in sorted(app.view_functions.iteritems()):
if route[0:4] == 'api.':
routes += [route[4:].replace('::', '.')]
routes = list(api.keys())
if api_docs.get(''):
del api_docs['']
del api_docs_missing['']
return render_template('api.html', fireEvent = fireEvent, routes = sorted(routes), api_docs = api_docs, api_docs_missing = sorted(api_docs_missing))
@web.route('getkey/')
def getApiKey():
return template_loader.load('api.html').generate(fireEvent = fireEvent, routes = sorted(routes), api_docs = api_docs, api_docs_missing = sorted(api_docs_missing), Env = Env)
api = None
params = getParams()
username = Env.setting('username')
password = Env.setting('password')
addView('docs', apiDocs)
if (params.get('u') == md5(username) or not username) and (params.get('p') == password or not password):
api = Env.setting('api_key')
return jsonified({
'success': api is not None,
'api_key': api
})
# Database debug manager
def databaseManage():
return template_loader.load('database.html').generate(fireEvent = fireEvent, Env = Env)
@app.errorhandler(404)
def page_not_found(error):
index_url = url_for('web.index')
url = request.path[len(index_url):]
addView('database', databaseManage)
# Make non basic auth option to get api key
class KeyHandler(RequestHandler):
def get(self, *args, **kwargs):
api_key = None
try:
username = Env.setting('username')
password = Env.setting('password')
if (self.get_argument('u') == md5(username) or not username) and (self.get_argument('p') == password or not password):
api_key = Env.setting('api_key')
self.write({
'success': api_key is not None,
'api_key': api_key
})
except:
log.error('Failed doing key request: %s', (traceback.format_exc()))
self.write({'success': False, 'error': 'Failed returning results'})
class LoginHandler(BaseHandler):
def get(self, *args, **kwargs):
if self.get_current_user():
self.redirect(Env.get('web_base'))
else:
self.write(template_loader.load('login.html').generate(sep = os.sep, fireEvent = fireEvent, Env = Env))
def post(self, *args, **kwargs):
api_key = None
username = Env.setting('username')
password = Env.setting('password')
if (self.get_argument('username') == username or not username) and (md5(self.get_argument('password')) == password or not password):
api_key = Env.setting('api_key')
if api_key:
remember_me = tryInt(self.get_argument('remember_me', default = 0))
self.set_secure_cookie('user', api_key, expires_days = 30 if remember_me > 0 else None)
self.redirect(Env.get('web_base'))
class LogoutHandler(BaseHandler):
def get(self, *args, **kwargs):
self.clear_cookie('user')
self.redirect('%slogin/' % Env.get('web_base'))
def page_not_found(rh):
index_url = Env.get('web_base')
url = rh.request.uri[len(index_url):]
if url[:3] != 'api':
if request.path != '/':
r = request.url.replace(request.path, index_url + '#' + url)
else:
r = '%s%s' % (request.url.rstrip('/'), index_url + '#' + url)
return redirect(r)
r = index_url + '#' + url.lstrip('/')
rh.redirect(r)
else:
time.sleep(0.1)
return 'Wrong API key used', 404
if not Env.get('dev'):
time.sleep(0.1)
rh.set_status(404)
rh.write('Wrong API key used')
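The `KeyHandler` above hands out the API key only when the md5-hashed username matches and the stored (already-hashed) password matches, or when no credentials are configured at all. A stdlib-only sketch of that check (the `md5` helper is assumed to return a hex digest, mirroring `couchpotato.core.helpers.variable.md5`; the credential values are made up for illustration):

```python
import hashlib

def md5(text):
    # Hex digest, as CouchPotato's variable helper is assumed to produce
    return hashlib.md5(text.encode('utf-8')).hexdigest()

def check_key_request(u, p, username, password, api_key):
    # Grant the key when the hashed username matches (or none is configured)
    # and the stored password hash matches (or none is configured).
    if (u == md5(username) or not username) and (p == password or not password):
        return {'success': True, 'api_key': api_key}
    return {'success': False, 'api_key': None}

print(check_key_request(md5('ruud'), md5('secret'), 'ruud', md5('secret'), 'abc123'))
# {'success': True, 'api_key': 'abc123'}
```

Note the asymmetry the handler relies on: the username arrives pre-hashed from the client and is compared against a hash computed server-side, while the password setting is already stored as a hash (see `md5Password` in the Core plugin diff) and is compared directly.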

View File

@@ -1,50 +1,77 @@
from flask.blueprints import Blueprint
from flask.helpers import url_for
from tornado.web import RequestHandler, asynchronous
from werkzeug.utils import redirect
from functools import wraps
from threading import Thread
import json
import threading
import traceback
import urllib
api = Blueprint('api', __name__)
api_docs = {}
api_docs_missing = []
from couchpotato.core.helpers.request import getParams
from couchpotato.core.logger import CPLog
from tornado.web import RequestHandler, asynchronous
log = CPLog(__name__)
api = {}
api_locks = {}
api_nonblock = {}
api_docs = {}
api_docs_missing = []
def run_async(func):
@wraps(func)
def async_func(*args, **kwargs):
func_hl = Thread(target = func, args = args, kwargs = kwargs)
func_hl.start()
return async_func
@run_async
def run_handler(route, kwargs, callback = None):
try:
res = api[route](**kwargs)
callback(res, route)
except:
log.error('Failed doing api request "%s": %s', (route, traceback.format_exc()))
callback({'success': False, 'error': 'Failed returning results'}, route)
# NonBlock API handler
class NonBlockHandler(RequestHandler):
def __init__(self, application, request, **kwargs):
cls = NonBlockHandler
cls.stoppers = []
super(NonBlockHandler, self).__init__(application, request, **kwargs)
stopper = None
@asynchronous
def get(self, route):
cls = NonBlockHandler
def get(self, route, *args, **kwargs):
route = route.strip('/')
start, stop = api_nonblock[route]
cls.stoppers.append(stop)
self.stopper = stop
start(self.onNewMessage, last_id = self.get_argument("last_id", None))
start(self.onNewMessage, last_id = self.get_argument('last_id', None))
def onNewMessage(self, response):
if self.request.connection.stream.closed():
self.on_connection_close()
return
self.finish(response)
try:
self.finish(response)
except:
log.debug('Failed doing nonblock request, probably already closed: %s', (traceback.format_exc()))
try: self.finish({'success': False, 'error': 'Failed returning results'})
except: pass
def on_connection_close(self):
cls = NonBlockHandler
for stop in cls.stoppers:
stop(self.onNewMessage)
if self.stopper:
self.stopper(self.onNewMessage)
cls.stoppers = []
self.stopper = None
def addApiView(route, func, static = False, docs = None, **kwargs):
api.add_url_rule(route + ('' if static else '/'), endpoint = route.replace('.', '::') if route else 'index', view_func = func, **kwargs)
if docs:
api_docs[route[4:] if route[0:4] == 'api.' else route] = docs
else:
api_docs_missing.append(route)
def addNonBlockApiView(route, func_tuple, docs = None, **kwargs):
api_nonblock[route] = func_tuple
@@ -53,9 +80,85 @@ def addNonBlockApiView(route, func_tuple, docs = None, **kwargs):
else:
api_docs_missing.append(route)
""" Api view """
def index():
index_url = url_for('web.index')
return redirect(index_url + 'docs/')
addApiView('', index)
# Blocking API handler
class ApiHandler(RequestHandler):
@asynchronous
def get(self, route, *args, **kwargs):
route = route.strip('/')
if not api.get(route):
self.write('API call doesn\'t seem to exist')
self.finish()
return
# Create lock if it doesn't exist
if route in api_locks and not api_locks.get(route):
api_locks[route] = threading.Lock()
api_locks[route].acquire()
try:
kwargs = {}
for x in self.request.arguments:
kwargs[x] = urllib.unquote(self.get_argument(x))
# Split array arguments
kwargs = getParams(kwargs)
kwargs['_request'] = self
# Remove t random string
try: del kwargs['t']
except: pass
# Add async callback handler
run_handler(route, kwargs, callback = self.taskFinished)
except:
log.error('Failed doing api request "%s": %s', (route, traceback.format_exc()))
try:
self.write({'success': False, 'error': 'Failed returning results'})
self.finish()
except:
log.error('Failed write error "%s": %s', (route, traceback.format_exc()))
api_locks[route].release()
post = get
def taskFinished(self, result, route):
if not self.request.connection.stream.closed():
try:
# Check JSONP callback
jsonp_callback = self.get_argument('callback_func', default = None)
if jsonp_callback:
self.write(str(jsonp_callback) + '(' + json.dumps(result) + ')')
self.set_header("Content-Type", "text/javascript")
self.finish()
elif isinstance(result, tuple) and result[0] == 'redirect':
self.redirect(result[1])
else:
self.write(result)
self.finish()
except:
log.debug('Failed doing request, probably already closed: %s', (traceback.format_exc()))
try: self.finish({'success': False, 'error': 'Failed returning results'})
except: pass
api_locks[route].release()
def addApiView(route, func, static = False, docs = None, **kwargs):
if static: func(route)
else:
api[route] = func
api_locks[route] = threading.Lock()
if docs:
api_docs[route[4:] if route[0:4] == 'api.' else route] = docs
else:
api_docs_missing.append(route)
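The api.py rewrite above moves blocking API calls off the IO loop: each registered view gets a `threading.Lock`, and `run_handler` is wrapped in a `run_async` decorator that executes it on a worker thread and reports back through a callback. A condensed, stdlib-only sketch of that pattern (the lock release is folded into the handler here for brevity; in the real code it happens in `taskFinished`):

```python
import threading
from functools import wraps

api = {}        # route -> view function
api_locks = {}  # route -> lock serializing calls per route

def run_async(func):
    # Run the wrapped call on a worker thread so the IO loop isn't blocked.
    @wraps(func)
    def async_func(*args, **kwargs):
        t = threading.Thread(target=func, args=args, kwargs=kwargs)
        t.start()
        return t
    return async_func

@run_async
def run_handler(route, kwargs, callback=None):
    try:
        callback(api[route](**kwargs), route)
    finally:
        api_locks[route].release()

def addApiView(route, func):
    api[route] = func
    api_locks[route] = threading.Lock()

# Usage: register a view, acquire its lock (as the request handler does),
# then hand off to the worker thread and collect the result.
results = []
addApiView('app.version', lambda **kw: {'version': '2.x'})
api_locks['app.version'].acquire()
run_handler('app.version', {}, callback=lambda res, route: results.append((route, res))).join()
print(results)  # [('app.version', {'version': '2.x'})]
```

The per-route lock is the design point: two concurrent requests to the same route are serialized, while requests to different routes still run in parallel on their own threads.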

View File

@@ -1,11 +1,3 @@
from couchpotato.api import addApiView
from couchpotato.core.event import fireEvent, addEvent
from couchpotato.core.helpers.request import jsonified
from couchpotato.core.helpers.variable import cleanHost, md5
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from tornado.ioloop import IOLoop
from uuid import uuid4
import os
import platform
@@ -14,8 +6,19 @@ import time
import traceback
import webbrowser
from couchpotato.api import addApiView
from couchpotato.core.event import fireEvent, addEvent
from couchpotato.core.helpers.variable import cleanHost, md5, isSubFolder
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from tornado.ioloop import IOLoop
log = CPLog(__name__)
autoload = 'Core'
class Core(Plugin):
@@ -48,6 +51,7 @@ class Core(Plugin):
addEvent('app.api_url', self.createApiUrl)
addEvent('app.version', self.version)
addEvent('app.load', self.checkDataDir)
addEvent('app.load', self.cleanUpFolders)
addEvent('setting.save.core.password', self.md5Password)
addEvent('setting.save.core.api_key', self.checkApikey)
@@ -56,40 +60,52 @@ class Core(Plugin):
if not Env.get('desktop'):
self.signalHandler()
# Set default urlopen timeout
import socket
socket.setdefaulttimeout(30)
def md5Password(self, value):
return md5(value.encode(Env.get('encoding'))) if value else ''
return md5(value) if value else ''
def checkApikey(self, value):
return value if value and len(value) > 3 else uuid4().hex
def checkDataDir(self):
if Env.get('app_dir') in Env.get('data_dir'):
if isSubFolder(Env.get('data_dir'), Env.get('app_dir')):
log.error('You should NOT use your CouchPotato directory to save your settings in. Files will get overwritten or be deleted.')
return True
def available(self):
return jsonified({
'success': True
})
def cleanUpFolders(self):
only_clean = ['couchpotato', 'libs', 'init']
self.deleteEmptyFolder(Env.get('app_dir'), show_error = False, only_clean = only_clean)
def shutdown(self):
def available(self, **kwargs):
return {
'success': True
}
def shutdown(self, **kwargs):
if self.shutdown_started:
return False
def shutdown():
self.initShutdown()
IOLoop.instance().add_callback(shutdown)
if IOLoop.current()._closing:
shutdown()
else:
IOLoop.current().add_callback(shutdown)
return 'shutdown'
def restart(self):
def restart(self, **kwargs):
if self.shutdown_started:
return False
def restart():
self.initShutdown(restart = True)
IOLoop.instance().add_callback(restart)
IOLoop.current().add_callback(restart)
return 'restarting'
@@ -102,7 +118,7 @@ class Core(Plugin):
self.shutdown_started = True
fireEvent('app.do_shutdown')
fireEvent('app.do_shutdown', restart = restart)
log.debug('Every plugin got shutdown event')
loop = True
@@ -114,7 +130,7 @@ class Core(Plugin):
if len(still_running) == 0:
break
elif starttime < time.time() - 30: # Always force break after 30s wait
elif starttime < time.time() - 30: # Always force break after 30s wait
break
running = list(set(still_running) - set(self.ignore_restart))
@@ -125,10 +141,13 @@ class Core(Plugin):
time.sleep(1)
log.debug('Save to shutdown/restart')
log.debug('Safe to shutdown/restart')
loop = IOLoop.current()
try:
IOLoop.instance().stop()
if not loop._closing:
loop.stop()
except RuntimeError:
pass
except:
@@ -156,10 +175,10 @@ class Core(Plugin):
host = 'localhost'
port = Env.setting('port')
return '%s:%d%s' % (cleanHost(host).rstrip('/'), int(port), '/' + Env.setting('url_base').lstrip('/') if Env.setting('url_base') else '')
return '%s:%d%s' % (cleanHost(host).rstrip('/'), int(port), Env.get('web_base'))
def createApiUrl(self):
return '%s/api/%s' % (self.createBaseUrl(), Env.setting('api_key'))
return '%sapi/%s' % (self.createBaseUrl(), Env.setting('api_key'))
def version(self):
ver = fireEvent('updater.info', single = True)
@@ -170,16 +189,112 @@ class Core(Plugin):
return '%s - %s-%s - v2' % (platf, ver.get('version')['type'], ver.get('version')['hash'])
def versionView(self):
return jsonified({
def versionView(self, **kwargs):
return {
'version': self.version()
})
}
def signalHandler(self):
if Env.get('daemonized'): return
def signal_handler(signal, frame):
fireEvent('app.shutdown')
def signal_handler(*args, **kwargs):
fireEvent('app.shutdown', single = True)
signal.signal(signal.SIGINT, signal_handler)
signal.signal(signal.SIGTERM, signal_handler)
config = [{
'name': 'core',
'order': 1,
'groups': [
{
'tab': 'general',
'name': 'basics',
'description': 'Needs restart before changes take effect.',
'wizard': True,
'options': [
{
'name': 'username',
'default': '',
},
{
'name': 'password',
'default': '',
'type': 'password',
},
{
'name': 'port',
'default': 5050,
'type': 'int',
'description': 'The port I should listen to.',
},
{
'name': 'ssl_cert',
'description': 'Path to SSL server.crt',
'advanced': True,
},
{
'name': 'ssl_key',
'description': 'Path to SSL server.key',
'advanced': True,
},
{
'name': 'launch_browser',
'default': True,
'type': 'bool',
'description': 'Launch the browser when I start.',
'wizard': True,
},
],
},
{
'tab': 'general',
'name': 'advanced',
'description': "For those who know what they're doing",
'advanced': True,
'options': [
{
'name': 'api_key',
'default': uuid4().hex,
'readonly': 1,
'description': 'Let 3rd party app do stuff. <a target="_self" href="../../docs/">Docs</a>',
},
{
'name': 'debug',
'default': 0,
'type': 'bool',
'description': 'Enable debugging.',
},
{
'name': 'development',
'default': 0,
'type': 'bool',
'description': 'Enable this if you\'re developing, and NOT in any other case, thanks.',
},
{
'name': 'data_dir',
'type': 'directory',
'description': 'Where cache/logs/etc are stored. Keep empty for defaults.',
},
{
'name': 'url_base',
'default': '',
'description': 'When using mod_proxy, use this base path to prefix the url.',
},
{
'name': 'permission_folder',
'default': '0755',
'label': 'Folder CHMOD',
'description': 'Can be either decimal (493) or octal (leading zero: 0755)',
},
{
'name': 'permission_file',
'default': '0755',
'label': 'File CHMOD',
'description': 'Same as Folder CHMOD but for files',
},
],
},
],
}]
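The createBaseUrl/createApiUrl change in the hunks above drops the hard-coded '/api/' separator because Env.get('web_base') is assumed to always end with a slash. A minimal standalone sketch of the resulting behavior (function names and parameters are illustrative, not CouchPotato's actual API):

```python
def create_base_url(host, port, web_base):
    # web_base is assumed to start and end with '/', e.g. '/' or '/couchpotato/'
    return '%s:%d%s' % (host.rstrip('/'), int(port), web_base)

def create_api_url(host, port, web_base, api_key):
    # web_base already supplies the trailing slash, so 'api' is appended directly
    return '%sapi/%s' % (create_base_url(host, port, web_base), api_key)
```

With web_base '/', this yields 'http://localhost:5050/api/&lt;key&gt;' without the doubled slash the old '%s/api/%s' format produced.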

@@ -1,100 +0,0 @@
from .main import Core
from uuid import uuid4
def start():
return Core()
config = [{
'name': 'core',
'order': 1,
'groups': [
{
'tab': 'general',
'name': 'basics',
'description': 'Needs restart before changes take effect.',
'wizard': True,
'options': [
{
'name': 'username',
'default': '',
},
{
'name': 'password',
'default': '',
'type': 'password',
},
{
'name': 'port',
'default': 5050,
'type': 'int',
'description': 'The port I should listen to.',
},
{
'name': 'ssl_cert',
'description': 'Path to SSL server.crt',
'advanced': True,
},
{
'name': 'ssl_key',
'description': 'Path to SSL server.key',
'advanced': True,
},
{
'name': 'launch_browser',
'default': True,
'type': 'bool',
'description': 'Launch the browser when I start.',
'wizard': True,
},
],
},
{
'tab': 'general',
'name': 'advanced',
'description': "For those who know what they're doing",
'advanced': True,
'options': [
{
'name': 'api_key',
'default': uuid4().hex,
'readonly': 1,
'description': 'Let 3rd party app do stuff. <a target="_self" href="../../docs/">Docs</a>',
},
{
'name': 'debug',
'default': 0,
'type': 'bool',
'description': 'Enable debugging.',
},
{
'name': 'development',
'default': 0,
'type': 'bool',
'description': 'Disables some checks/downloads for faster reloading.',
},
{
'name': 'data_dir',
'type': 'directory',
'description': 'Where cache/logs/etc are stored. Keep empty for defaults.',
},
{
'name': 'url_base',
'default': '',
'description': 'When using mod_proxy use this to append the url with this.',
},
{
'name': 'permission_folder',
'default': '0755',
'label': 'Folder CHMOD',
'description': 'Can be either decimal (493) or octal (leading zero: 0755)',
},
{
'name': 'permission_file',
'default': '0755',
'label': 'File CHMOD',
'description': 'Same as Folder CHMOD but for files',
},
],
},
],
}]

@@ -0,0 +1,212 @@
import os
import re
import traceback
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.encoding import ss
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from minify.cssmin import cssmin
from minify.jsmin import jsmin
from tornado.web import StaticFileHandler
log = CPLog(__name__)
autoload = 'ClientScript'
class ClientScript(Plugin):
core_static = {
'style': [
'style/main.css',
'style/uniform.generic.css',
'style/uniform.css',
'style/settings.css',
],
'script': [
'scripts/library/mootools.js',
'scripts/library/mootools_more.js',
'scripts/library/uniform.js',
'scripts/library/form_replacement/form_check.js',
'scripts/library/form_replacement/form_radio.js',
'scripts/library/form_replacement/form_dropdown.js',
'scripts/library/form_replacement/form_selectoption.js',
'scripts/library/question.js',
'scripts/library/scrollspy.js',
'scripts/library/spin.js',
'scripts/library/Array.stableSort.js',
'scripts/library/async.js',
'scripts/couchpotato.js',
'scripts/api.js',
'scripts/library/history.js',
'scripts/page.js',
'scripts/block.js',
'scripts/block/navigation.js',
'scripts/block/footer.js',
'scripts/block/menu.js',
'scripts/page/home.js',
'scripts/page/settings.js',
'scripts/page/about.js',
],
}
urls = {'style': {}, 'script': {}}
minified = {'style': {}, 'script': {}}
paths = {'style': {}, 'script': {}}
comment = {
'style': '/*** %s:%d ***/\n',
'script': '// %s:%d\n'
}
html = {
'style': '<link rel="stylesheet" href="%s" type="text/css">',
'script': '<script type="text/javascript" src="%s"></script>',
}
def __init__(self):
addEvent('register_style', self.registerStyle)
addEvent('register_script', self.registerScript)
addEvent('clientscript.get_styles', self.getStyles)
addEvent('clientscript.get_scripts', self.getScripts)
if not Env.get('dev'):
addEvent('app.load', self.minify)
self.addCore()
def addCore(self):
for static_type in self.core_static:
for rel_path in self.core_static.get(static_type):
file_path = os.path.join(Env.get('app_dir'), 'couchpotato', 'static', rel_path)
core_url = 'static/%s' % rel_path
if static_type == 'script':
self.registerScript(core_url, file_path, position = 'front')
else:
self.registerStyle(core_url, file_path, position = 'front')
def minify(self):
# Create cache dir
cache = Env.get('cache_dir')
parent_dir = os.path.join(cache, 'minified')
self.makeDir(parent_dir)
Env.get('app').add_handlers(".*$", [(Env.get('web_base') + 'minified/(.*)', StaticFileHandler, {'path': parent_dir})])
for file_type in ['style', 'script']:
ext = 'js' if file_type == 'script' else 'css'
positions = self.paths.get(file_type, {})
for position in positions:
files = positions.get(position)
self._minify(file_type, files, position, position + '.' + ext)
def _minify(self, file_type, files, position, out):
cache = Env.get('cache_dir')
out_name = out
out = os.path.join(cache, 'minified', out_name)
raw = []
for file_path in files:
f = open(file_path, 'r').read()
if file_type == 'script':
data = jsmin(f)
else:
data = self.prefix(f)
data = cssmin(data)
data = data.replace('../images/', '../static/images/')
data = data.replace('../fonts/', '../static/fonts/')
data = data.replace('../../static/', '../static/') # Replace inside plugins
raw.append({'file': file_path, 'date': int(os.path.getmtime(file_path)), 'data': data})
# Combine all files together with some comments
data = ''
for r in raw:
data += self.comment.get(file_type) % (ss(r.get('file')), r.get('date'))
data += r.get('data') + '\n\n'
self.createFile(out, data.strip())
if not self.minified.get(file_type):
self.minified[file_type] = {}
if not self.minified[file_type].get(position):
self.minified[file_type][position] = []
minified_url = 'minified/%s?%s' % (out_name, tryInt(os.path.getmtime(out)))
self.minified[file_type][position].append(minified_url)
def getStyles(self, *args, **kwargs):
return self.get('style', *args, **kwargs)
def getScripts(self, *args, **kwargs):
return self.get('script', *args, **kwargs)
def get(self, type, as_html = False, location = 'head'):
data = '' if as_html else []
try:
try:
if not Env.get('dev'):
return self.minified[type][location]
except:
pass
return self.urls[type][location]
except:
log.error('Error getting minified %s, %s: %s', (type, location, traceback.format_exc()))
return data
def registerStyle(self, api_path, file_path, position = 'head'):
self.register(api_path, file_path, 'style', position)
def registerScript(self, api_path, file_path, position = 'head'):
self.register(api_path, file_path, 'script', position)
def register(self, api_path, file_path, type, location):
api_path = '%s?%s' % (api_path, tryInt(os.path.getmtime(file_path)))
if not self.urls[type].get(location):
self.urls[type][location] = []
self.urls[type][location].append(api_path)
if not self.paths[type].get(location):
self.paths[type][location] = []
self.paths[type][location].append(file_path)
prefix_properties = ['border-radius', 'transform', 'transition', 'box-shadow']
prefix_tags = ['ms', 'moz', 'webkit']
def prefix(self, data):
trimmed_data = re.sub('(\t|\n|\r)+', '', data)
new_data = ''
colon_split = trimmed_data.split(';')
for splt in colon_split:
curl_split = splt.strip().split('{')
for curly in curl_split:
curly = curly.strip()
for prop in self.prefix_properties:
if curly[:len(prop) + 1] == prop + ':':
for tag in self.prefix_tags:
new_data += ' -%s-%s; ' % (tag, curly)
new_data += curly + (' { ' if len(curl_split) > 1 else ' ')
new_data += '; '
new_data = new_data.replace('{ ;', '; ').replace('} ;', '} ')
return new_data
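The prefix() method above injects vendor-prefixed copies of a few CSS properties during minification. A simplified standalone sketch of that idea, operating on a single declaration rather than a whole stylesheet (not the exact parser above):

```python
PREFIX_PROPERTIES = ['border-radius', 'transform', 'transition', 'box-shadow']
PREFIX_TAGS = ['ms', 'moz', 'webkit']

def vendor_prefix(declaration):
    # Given one 'prop: value' declaration, return it preceded by
    # -ms-/-moz-/-webkit- copies when the property needs prefixing
    declaration = declaration.strip()
    prop = declaration.split(':', 1)[0].strip()
    if prop in PREFIX_PROPERTIES:
        return ['-%s-%s' % (tag, declaration) for tag in PREFIX_TAGS] + [declaration]
    return [declaration]
```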

@@ -1,6 +0,0 @@
from .main import ClientScript
def start():
return ClientScript()
config = []

@@ -1,56 +0,0 @@
from couchpotato.core.event import addEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
class ClientScript(Plugin):
urls = {
'style': {},
'script': {},
}
html = {
'style': '<link rel="stylesheet" href="%s" type="text/css">',
'script': '<script type="text/javascript" src="%s"></script>',
}
def __init__(self):
addEvent('register_style', self.registerStyle)
addEvent('register_script', self.registerScript)
addEvent('clientscript.get_styles', self.getStyles)
addEvent('clientscript.get_scripts', self.getScripts)
def getStyles(self, *args, **kwargs):
return self.get('style', *args, **kwargs)
def getScripts(self, *args, **kwargs):
return self.get('script', *args, **kwargs)
def get(self, type, as_html = False, location = 'head'):
data = '' if as_html else []
try:
return self.urls[type][location]
except Exception, e:
log.error(e)
return data
def registerStyle(self, path, position = 'head'):
self.register(path, 'style', position)
def registerScript(self, path, position = 'head'):
self.register(path, 'script', position)
def register(self, filepath, type, location):
if not self.urls[type].get(location):
self.urls[type][location] = []
filePath = filepath
self.urls[type][location].append(filePath)

@@ -5,6 +5,9 @@ from couchpotato.environment import Env
log = CPLog(__name__)
autoload = 'Desktop'
if Env.get('desktop'):
class Desktop(Plugin):

@@ -1,6 +0,0 @@
from .main import Desktop
def start():
return Desktop()
config = []

@@ -0,0 +1,20 @@
from .main import Downloader
def autoload():
return Downloader()
config = [{
'name': 'download_providers',
'groups': [
{
'label': 'Downloaders',
'description': 'You can select different downloaders for each type (usenet / torrent)',
'type': 'list',
'name': 'download_providers',
'tab': 'downloaders',
'options': [],
},
],
}]

@@ -0,0 +1,232 @@
from base64 import b32decode, b16encode
import random
import re
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.variable import mergeDicts
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import Provider
from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
## This is here to load the static files
class Downloader(Plugin):
pass
class DownloaderBase(Provider):
protocol = []
http_time_between_calls = 0
status_support = True
torrent_sources = [
'https://zoink.it/torrent/%s.torrent',
'http://torrage.com/torrent/%s.torrent',
'https://torcache.net/torrent/%s.torrent',
]
torrent_trackers = [
'udp://tracker.istole.it:80/announce',
'http://tracker.istole.it/announce',
'udp://fr33domtracker.h33t.com:3310/announce',
'http://tracker.publicbt.com/announce',
'udp://tracker.publicbt.com:80/announce',
'http://tracker.ccc.de/announce',
'udp://tracker.ccc.de:80/announce',
'http://exodus.desync.com/announce',
'http://exodus.desync.com:6969/announce',
'http://tracker.publichd.eu/announce',
'udp://tracker.publichd.eu:80/announce',
'http://tracker.openbittorrent.com/announce',
'udp://tracker.openbittorrent.com/announce',
'udp://tracker.openbittorrent.com:80/announce',
'udp://open.demonii.com:1337/announce',
]
def __init__(self):
addEvent('download', self._download)
addEvent('download.enabled', self._isEnabled)
addEvent('download.enabled_protocols', self.getEnabledProtocol)
addEvent('download.status', self._getAllDownloadStatus)
addEvent('download.remove_failed', self._removeFailed)
addEvent('download.pause', self._pause)
addEvent('download.process_complete', self._processComplete)
addApiView('download.%s.test' % self.getName().lower(), self._test)
def getEnabledProtocol(self):
for download_protocol in self.protocol:
if self.isEnabled(manual = True, data = {'protocol': download_protocol}):
return self.protocol
return []
def _download(self, data = None, media = None, manual = False, filedata = None):
if not media: media = {}
if not data: data = {}
if self.isDisabled(manual, data):
return
return self.download(data = data, media = media, filedata = filedata)
def download(self, *args, **kwargs):
return False
def _getAllDownloadStatus(self, download_ids):
if self.isDisabled(manual = True, data = {}):
return
ids = [download_id['id'] for download_id in download_ids if download_id['downloader'] == self.getName()]
if ids:
return self.getAllDownloadStatus(ids)
else:
return
def getAllDownloadStatus(self, ids):
return []
def _removeFailed(self, release_download):
if self.isDisabled(manual = True, data = {}):
return
if release_download and release_download.get('downloader') == self.getName():
if self.conf('delete_failed'):
return self.removeFailed(release_download)
return False
return
def removeFailed(self, release_download):
return
def _processComplete(self, release_download):
if self.isDisabled(manual = True, data = {}):
return
if release_download and release_download.get('downloader') == self.getName():
if self.conf('remove_complete', default = False):
return self.processComplete(release_download = release_download, delete_files = self.conf('delete_files', default = False))
return False
return
def processComplete(self, release_download, delete_files):
return
def isCorrectProtocol(self, protocol):
is_correct = protocol in self.protocol
if not is_correct:
log.debug("Downloader doesn't support this protocol")
return is_correct
def magnetToTorrent(self, magnet_link):
torrent_hash = re.findall('urn:btih:([\w]{32,40})', magnet_link)[0].upper()
# Convert base 32 to hex
if len(torrent_hash) == 32:
torrent_hash = b16encode(b32decode(torrent_hash))
sources = self.torrent_sources
random.shuffle(sources)
for source in sources:
try:
filedata = self.urlopen(source % torrent_hash, headers = {'Referer': ''}, show_error = False)
if 'torcache' in filedata and 'file not found' in filedata.lower():
continue
return filedata
except:
log.debug('Torrent hash "%s" wasn\'t found on: %s', (torrent_hash, source))
log.error('Failed converting magnet url to torrent: %s', torrent_hash)
return False
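The hash normalization at the top of magnetToTorrent() can be isolated. A Python 3 sketch of the base32-to-hex conversion (the original runs on Python 2, where b16encode returns a str, so the extra decode here is an adaptation):

```python
import re
from base64 import b16encode, b32decode

def magnet_info_hash(magnet_link):
    # The btih info-hash is either 40 hex chars or 32 base32 chars
    torrent_hash = re.findall(r'urn:btih:([\w]{32,40})', magnet_link)[0].upper()
    if len(torrent_hash) == 32:
        # Convert base32 to the canonical 40-char hex form
        torrent_hash = b16encode(b32decode(torrent_hash)).decode('ascii')
    return torrent_hash
```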
def downloadReturnId(self, download_id):
return {
'downloader': self.getName(),
'status_support': self.status_support,
'id': download_id
}
def isDisabled(self, manual = False, data = None):
if not data: data = {}
return not self.isEnabled(manual, data)
def _isEnabled(self, manual, data = None):
if not data: data = {}
if not self.isEnabled(manual, data):
return
return True
def isEnabled(self, manual = False, data = None):
if not data: data = {}
d_manual = self.conf('manual', default = False)
return super(DownloaderBase, self).isEnabled() and \
(d_manual and manual or d_manual is False) and \
(not data or self.isCorrectProtocol(data.get('protocol')))
def _test(self, **kwargs):
t = self.test()
if isinstance(t, tuple):
return {'success': t[0], 'msg': t[1]}
return {'success': t}
def test(self):
return False
def _pause(self, release_download, pause = True):
if self.isDisabled(manual = True, data = {}):
return
if release_download and release_download.get('downloader') == self.getName():
self.pause(release_download, pause)
return True
return False
def pause(self, release_download, pause):
return
class ReleaseDownloadList(list):
provider = None
def __init__(self, provider, **kwargs):
self.provider = provider
self.kwargs = kwargs
super(ReleaseDownloadList, self).__init__()
def extend(self, results):
for r in results:
self.append(r)
def append(self, result):
new_result = self.fillResult(result)
super(ReleaseDownloadList, self).append(new_result)
def fillResult(self, result):
defaults = {
'id': 0,
'status': 'busy',
'downloader': self.provider.getName(),
'folder': '',
'files': [],
}
return mergeDicts(defaults, result)

@@ -0,0 +1,76 @@
var DownloadersBase = new Class({
Implements: [Events],
initialize: function(){
var self = this;
// Add test buttons to settings page
App.addEvent('loadSettings', self.addTestButtons.bind(self));
},
// Downloaders setting tests
addTestButtons: function(){
var self = this;
var setting_page = App.getPage('Settings');
setting_page.addEvent('create', function(){
Object.each(setting_page.tabs.downloaders.groups, self.addTestButton.bind(self))
})
},
addTestButton: function(fieldset, plugin_name){
var self = this,
button_name = self.testButtonName(fieldset);
if(button_name.contains('Downloaders')) return;
new Element('.ctrlHolder.test_button').adopt(
new Element('a.button', {
'text': button_name,
'events': {
'click': function(){
var button = fieldset.getElement('.test_button .button');
button.set('text', 'Connecting...');
Api.request('download.'+plugin_name+'.test', {
'onComplete': function(json){
button.set('text', button_name);
var message;
if(json.success){
message = new Element('span.success', {
'text': 'Connection successful'
}).inject(button, 'after')
}
else {
var msg_text = 'Connection failed. Check logs for details.';
if(json.hasOwnProperty('msg')) msg_text = json.msg;
message = new Element('span.failed', {
'text': msg_text
}).inject(button, 'after')
}
(function(){
message.destroy();
}).delay(3000)
}
});
}
}
})
).inject(fieldset);
},
testButtonName: function(fieldset){
var name = String(fieldset.getElement('h2').innerHTML).substring(0,String(fieldset.getElement('h2').innerHTML).indexOf("<span"));
return 'Test '+name;
}
});
var Downloaders = new DownloadersBase();

@@ -5,6 +5,8 @@ from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
autoload = 'Scheduler'
class Scheduler(Plugin):
@@ -16,60 +18,29 @@ class Scheduler(Plugin):
addEvent('schedule.cron', self.cron)
addEvent('schedule.interval', self.interval)
addEvent('schedule.start', self.start)
addEvent('schedule.restart', self.start)
addEvent('app.load', self.start)
addEvent('schedule.remove', self.remove)
addEvent('schedule.queue', self.queue)
self.sched = Sched(misfire_grace_time = 60)
self.sched.start()
self.started = True
def remove(self, identifier):
for type in ['interval', 'cron']:
for cron_type in ['intervals', 'crons']:
try:
self.sched.unschedule_job(getattr(self, type)[identifier]['job'])
log.debug('%s unscheduled %s', (type.capitalize(), identifier))
self.sched.unschedule_job(getattr(self, cron_type)[identifier]['job'])
log.debug('%s unscheduled %s', (cron_type.capitalize(), identifier))
except:
pass
def start(self):
# Stop all running
self.stop()
# Crons
for identifier in self.crons:
try:
self.remove(identifier)
cron = self.crons[identifier]
job = self.sched.add_cron_job(cron['handle'], day = cron['day'], hour = cron['hour'], minute = cron['minute'])
cron['job'] = job
except ValueError, e:
log.error('Failed adding cronjob: %s', e)
# Intervals
for identifier in self.intervals:
try:
self.remove(identifier)
interval = self.intervals[identifier]
job = self.sched.add_interval_job(interval['handle'], hours = interval['hours'], minutes = interval['minutes'], seconds = interval['seconds'])
interval['job'] = job
except ValueError, e:
log.error('Failed adding interval cronjob: %s', e)
# Start it
log.debug('Starting scheduler')
self.sched.start()
self.started = True
log.debug('Scheduler started')
def doShutdown(self):
super(Scheduler, self).doShutdown()
def doShutdown(self, *args, **kwargs):
self.stop()
return super(Scheduler, self).doShutdown(*args, **kwargs)
def stop(self):
if self.started:
log.debug('Stopping scheduler')
self.sched.shutdown()
self.sched.shutdown(wait = False)
log.debug('Scheduler stopped')
self.started = False
@@ -82,6 +53,7 @@ class Scheduler(Plugin):
'day': day,
'hour': hour,
'minute': minute,
'job': self.sched.add_cron_job(handle, day = day, hour = hour, minute = minute)
}
def interval(self, identifier = '', handle = None, hours = 0, minutes = 0, seconds = 0):
@@ -93,4 +65,18 @@ class Scheduler(Plugin):
'hours': hours,
'minutes': minutes,
'seconds': seconds,
'job': self.sched.add_interval_job(handle, hours = hours, minutes = minutes, seconds = seconds)
}
return True
def queue(self, handlers = None):
if not handlers: handlers = []
for h in handlers:
h()
if self.shuttingDown():
break
return True
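The new queue() handler runs callables sequentially and bails out mid-queue once a shutdown is requested. A standalone sketch, where shutting_down stands in for self.shuttingDown():

```python
def run_queue(handlers, shutting_down=lambda: False):
    # Run each handler in order, stopping early once a shutdown is requested;
    # returns how many handlers actually ran
    ran = 0
    for handler in handlers:
        handler()
        ran += 1
        if shutting_down():
            break
    return ran
```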

@@ -1,6 +0,0 @@
from .main import Scheduler
def start():
return Scheduler()
config = []

@@ -1,8 +1,10 @@
from .main import Updater
from couchpotato.environment import Env
import os
def start():
from .main import Updater
from couchpotato.environment import Env
def autoload():
return Updater()
config = [{

@@ -1,20 +1,25 @@
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import ss
from couchpotato.core.helpers.request import jsonified
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from datetime import datetime
from dateutil.parser import parse
from git.repository import LocalRepository
import json
import os
import shutil
import tarfile
import time
import traceback
import zipfile
from datetime import datetime
from threading import RLock
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import sp
from couchpotato.core.helpers.variable import removePyc
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
from dateutil.parser import parse
from git.repository import LocalRepository
import version
from six.moves import filter
log = CPLog(__name__)
@@ -22,6 +27,7 @@ log = CPLog(__name__)
class Updater(Plugin):
available_notified = False
_lock = RLock()
def __init__(self):
@@ -32,11 +38,11 @@ class Updater(Plugin):
else:
self.updater = SourceUpdater()
fireEvent('schedule.interval', 'updater.check', self.autoUpdate, hours = 6)
addEvent('app.load', self.autoUpdate)
addEvent('app.load', self.logVersion, priority = 10000)
addEvent('app.load', self.setCrons)
addEvent('updater.info', self.info)
addApiView('updater.info', self.getInfo, docs = {
addApiView('updater.info', self.info, docs = {
'desc': 'Get updater information',
'return': {
'type': 'object',
@@ -52,8 +58,21 @@ class Updater(Plugin):
'return': {'type': 'see updater.info'}
})
addEvent('setting.save.updater.enabled.after', self.setCrons)
def logVersion(self):
info = self.info()
log.info('=== VERSION %s, using %s ===', (info.get('version', {}).get('repr', 'UNKNOWN'), self.updater.getName()))
def setCrons(self):
fireEvent('schedule.remove', 'updater.check', single = True)
if self.isEnabled():
fireEvent('schedule.interval', 'updater.check', self.autoUpdate, hours = 6)
self.autoUpdate() # Check after enabling
def autoUpdate(self):
if self.check() and self.conf('automatic') and not self.updater.update_failed:
if self.isEnabled() and self.check() and self.conf('automatic') and not self.updater.update_failed:
if self.updater.doUpdate():
# Notify before restarting
@@ -71,31 +90,40 @@ class Updater(Plugin):
return False
def check(self):
if self.isDisabled():
def check(self, force = False):
if not force and self.isDisabled():
return
if self.updater.check():
if not self.available_notified and self.conf('notification') and not self.conf('automatic'):
fireEvent('updater.available', message = 'A new update is available', data = self.updater.info())
info = self.updater.info()
version_date = datetime.fromtimestamp(info['update_version']['date'])
fireEvent('updater.available', message = 'A new update with hash "%s" is available, this version is from %s' % (info['update_version']['hash'], version_date), data = info)
self.available_notified = True
return True
return False
def info(self):
return self.updater.info()
def info(self, **kwargs):
self._lock.acquire()
def getInfo(self):
return jsonified(self.updater.info())
info = {}
try:
info = self.updater.info()
except:
log.error('Failed getting updater info: %s', traceback.format_exc())
def checkView(self):
return jsonified({
'update_available': self.check(),
self._lock.release()
return info
def checkView(self, **kwargs):
return {
'update_available': self.check(force = True),
'info': self.updater.info()
})
}
def doUpdateView(self):
def doUpdateView(self, **kwargs):
self.check()
if not self.updater.update_version:
@@ -110,9 +138,15 @@ class Updater(Plugin):
if not success:
success = True
return jsonified({
return {
'success': success
})
}
def doShutdown(self, *args, **kwargs):
if not Env.get('dev') and not Env.get('desktop'):
removePyc(Env.get('app_dir'), show_logs = False)
return super(Updater, self).doShutdown(*args, **kwargs)
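The RLock-guarded info() above pairs acquire() and release() manually; if anything between them raised outside the inner try, the lock would leak. A sketch of the same guard using a context manager (class and method names are illustrative, not the real Updater):

```python
import traceback
from threading import RLock

class UpdaterInfo:
    _lock = RLock()

    def info(self):
        # 'with' releases the lock even if the wrapped call raises
        with self._lock:
            try:
                return self._fetch_info()
            except Exception:
                print('Failed getting updater info: %s' % traceback.format_exc())
                return {}

    def _fetch_info(self):
        # Placeholder for updater.info(); the real code queries git/source state
        return {'version': {'type': 'git'}}
```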
class BaseUpdater(Plugin):
@@ -125,50 +159,29 @@ class BaseUpdater(Plugin):
update_failed = False
update_version = None
last_check = 0
auto_register_static = False
def doUpdate(self):
pass
def getInfo(self):
return jsonified(self.info())
def info(self):
current_version = self.getVersion()
return {
'last_check': self.last_check,
'update_version': self.update_version,
'version': self.getVersion(),
'version': current_version,
'repo_name': '%s/%s' % (self.repo_user, self.repo_name),
'branch': self.branch,
'branch': current_version.get('branch', self.branch),
}
def getVersion(self):
pass
def check(self):
pass
def deletePyc(self, only_excess = True):
for root, dirs, files in os.walk(ss(Env.get('app_dir'))):
pyc_files = filter(lambda filename: filename.endswith('.pyc'), files)
py_files = set(filter(lambda filename: filename.endswith('.py'), files))
excess_pyc_files = filter(lambda pyc_filename: pyc_filename[:-1] not in py_files, pyc_files) if only_excess else pyc_files
for excess_pyc_file in excess_pyc_files:
full_path = os.path.join(root, excess_pyc_file)
log.debug('Removing old PYC file: %s', full_path)
try:
os.remove(full_path)
except:
log.error('Couldn\'t remove %s: %s', (full_path, traceback.format_exc()))
for dir_name in dirs:
full_path = os.path.join(root, dir_name)
if len(os.listdir(full_path)) == 0:
try:
os.rmdir(full_path)
except:
log.error('Couldn\'t remove empty directory %s: %s', (full_path, traceback.format_exc()))
class GitUpdater(BaseUpdater):
@@ -178,15 +191,9 @@ class GitUpdater(BaseUpdater):
def doUpdate(self):
try:
log.debug('Stashing local changes')
self.repo.saveStash()
log.info('Updating to latest version')
self.repo.pull()
# Delete leftover .pyc files
self.deletePyc()
return True
except:
log.error('Failed updating via GIT: %s', traceback.format_exc())
@@ -199,14 +206,16 @@ class GitUpdater(BaseUpdater):
if not self.version:
try:
output = self.repo.getHead() # Yes, please
output = self.repo.getHead() # Yes, please
log.debug('Git version output: %s', output.hash)
self.version = {
'repr': 'git:(%s:%s %s) %s (%s)' % (self.repo_user, self.repo_name, self.repo.getCurrentBranch().name or self.branch, output.hash[:8], datetime.fromtimestamp(output.getDate())),
'hash': output.hash[:8],
'date': output.getDate(),
'type': 'git',
'branch': self.repo.getCurrentBranch().name
}
except Exception, e:
except Exception as e:
log.error('Failed using GIT updater, running from source, you need to have GIT installed. %s', e)
return 'No GIT'
@@ -229,7 +238,7 @@ class GitUpdater(BaseUpdater):
local = self.repo.getHead()
remote = branch.getHead()
log.info('Versions, local:%s, remote:%s', (local.hash[:8], remote.hash[:8]))
log.debug('Versions, local:%s, remote:%s', (local.hash[:8], remote.hash[:8]))
if local.getDate() < remote.getDate():
self.update_version = {
@@ -242,7 +251,6 @@ class GitUpdater(BaseUpdater):
return False
class SourceUpdater(BaseUpdater):
def __init__(self):
@@ -255,11 +263,11 @@ class SourceUpdater(BaseUpdater):
def doUpdate(self):
try:
url = 'https://github.com/%s/%s/tarball/%s' % (self.repo_user, self.repo_name, self.branch)
destination = os.path.join(Env.get('cache_dir'), self.update_version.get('hash') + '.tar.gz')
extracted_path = os.path.join(Env.get('cache_dir'), 'temp_updater')
download_data = fireEvent('cp.source_url', repo = self.repo_user, repo_name = self.repo_name, branch = self.branch, single = True)
destination = os.path.join(Env.get('cache_dir'), self.update_version.get('hash')) + '.' + download_data.get('type')
destination = fireEvent('file.download', url = url, dest = destination, single = True)
extracted_path = os.path.join(Env.get('cache_dir'), 'temp_updater')
destination = fireEvent('file.download', url = download_data.get('url'), dest = destination, single = True)
# Cleanup leftover from last time
if os.path.isdir(extracted_path):
@@ -267,9 +275,15 @@ class SourceUpdater(BaseUpdater):
self.makeDir(extracted_path)
# Extract
tar = tarfile.open(destination)
tar.extractall(path = extracted_path)
tar.close()
if download_data.get('type') == 'zip':
zip_file = zipfile.ZipFile(destination)
zip_file.extractall(extracted_path)
zip_file.close()
else:
tar = tarfile.open(destination)
tar.extractall(path = extracted_path)
tar.close()
os.remove(destination)
if self.replaceWith(os.path.join(extracted_path, os.listdir(extracted_path)[0])):
@@ -286,10 +300,12 @@ class SourceUpdater(BaseUpdater):
return False
def replaceWith(self, path):
app_dir = ss(Env.get('app_dir'))
path = sp(path)
app_dir = Env.get('app_dir')
data_dir = Env.get('data_dir')
# Get list of files we want to overwrite
self.deletePyc()
removePyc(app_dir)
existing_files = []
for root, subfiles, filenames in os.walk(app_dir):
for filename in filenames:
@@ -318,22 +334,24 @@ class SourceUpdater(BaseUpdater):
log.error('Failed overwriting file "%s": %s', (tofile, traceback.format_exc()))
return False
if Env.get('app_dir') not in Env.get('data_dir'):
for still_exists in existing_files:
try:
os.remove(still_exists)
except:
log.error('Failed removing non-used file: %s', traceback.format_exc())
for still_exists in existing_files:
if data_dir in still_exists:
continue
try:
os.remove(still_exists)
except:
log.error('Failed removing non-used file: %s', traceback.format_exc())
return True
def removeDir(self, path):
try:
if os.path.isdir(path):
shutil.rmtree(path)
except OSError, inst:
os.chmod(inst.filename, 0777)
except OSError as inst:
os.chmod(inst.filename, 0o777)
self.removeDir(path)
def getVersion(self):
@@ -347,7 +365,8 @@ class SourceUpdater(BaseUpdater):
log.debug('Source version output: %s', output)
self.version = output
self.version['type'] = 'source'
except Exception, e:
self.version['repr'] = 'source:(%s:%s % s) %s (%s)' % (self.repo_user, self.repo_name, self.branch, output.get('hash', '')[:8], datetime.fromtimestamp(output.get('date', 0)))
except Exception as e:
log.error('Failed using source updater. %s', e)
return {}
@@ -377,7 +396,7 @@ class SourceUpdater(BaseUpdater):
return {
'hash': commit['sha'],
'date': int(time.mktime(parse(commit['commit']['committer']['date']).timetuple())),
'date': int(time.mktime(parse(commit['commit']['committer']['date']).timetuple())),
}
except:
log.error('Failed getting latest request from github: %s', traceback.format_exc())
@@ -422,7 +441,7 @@ class DesktopUpdater(BaseUpdater):
if latest and latest != current_version.get('hash'):
self.update_version = {
'hash': latest,
'date': None,
'date': None,
'changelog': self.desktop._changelogURL,
}
@@ -434,6 +453,7 @@ class DesktopUpdater(BaseUpdater):
def getVersion(self):
return {
'repr': 'desktop: %s' % self.desktop._esky.active_version,
'hash': self.desktop._esky.active_version,
'date': None,
'type': 'desktop',
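The extraction hunk above replaces the tar-only path with a dispatch on the downloaded archive's type. A minimal standalone sketch of that branch (the `extract_archive` helper and the file names are illustrative, not part of the commit):

```python
import os
import tarfile
import tempfile
import zipfile

def extract_archive(destination, extracted_path, archive_type):
    # Mirror of the updater's dispatch: zip archives go through zipfile,
    # everything else is treated as a tarball; the download is removed after.
    if archive_type == 'zip':
        zip_file = zipfile.ZipFile(destination)
        zip_file.extractall(extracted_path)
        zip_file.close()
    else:
        tar = tarfile.open(destination)
        tar.extractall(path = extracted_path)
        tar.close()
    os.remove(destination)

# Build a throwaway tarball and run it through the helper
work = tempfile.mkdtemp()
payload = os.path.join(work, 'hello.txt')
with open(payload, 'w') as f:
    f.write('hi')
archive = os.path.join(work, 'update.tar.gz')
with tarfile.open(archive, 'w:gz') as tar:
    tar.add(payload, arcname = 'hello.txt')
out = os.path.join(work, 'extracted')
os.makedirs(out)
extract_archive(archive, out, 'tar.gz')
```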


@@ -5,7 +5,7 @@ var UpdaterBase = new Class({
initialize: function(){
var self = this;
App.addEvent('load', self.info.bind(self, 1000))
App.addEvent('load', self.info.bind(self, 2000));
App.addEvent('unload', function(){
if(self.timer)
clearTimeout(self.timer);
@@ -24,7 +24,7 @@ var UpdaterBase = new Class({
self.doUpdate();
else {
App.unBlockPage();
App.fireEvent('message', 'No updates available');
App.trigger('message', ['No updates available']);
}
}
})
@@ -66,7 +66,7 @@ var UpdaterBase = new Class({
var changelog = 'https://github.com/'+data.repo_name+'/compare/'+data.version.hash+'...'+data.branch;
if(data.update_version.changelog)
changelog = data.update_version.changelog + '#' + data.version.hash+'...'+data.update_version.hash
changelog = data.update_version.changelog + '#' + data.version.hash+'...'+data.update_version.hash;
self.message = new Element('div.message.update').adopt(
new Element('span', {
@@ -84,7 +84,7 @@ var UpdaterBase = new Class({
'click': self.doUpdate.bind(self)
}
})
).inject($(document.body).getElement('.header'))
).inject(document.body)
},
doUpdate: function(){


@@ -1,26 +0,0 @@
from couchpotato.core.helpers.variable import md5
from couchpotato.environment import Env
from flask import request, Response
from functools import wraps
def check_auth(username, password):
return username == Env.setting('username') and password == Env.setting('password')
def authenticate():
return Response(
'This is not the page you are looking for. *waves hand*', 401,
{'WWW-Authenticate': 'Basic realm="CouchPotato Login"'}
)
def requires_auth(f):
@wraps(f)
def decorated(*args, **kwargs):
auth = getattr(request, 'authorization')
if Env.setting('username') and Env.setting('password'):
if (not auth or not check_auth(auth.username.decode('latin1'), md5(auth.password.decode('latin1').encode(Env.get('encoding'))))):
return authenticate()
return f(*args, **kwargs)
return decorated


@@ -0,0 +1,607 @@
import json
import os
import time
import traceback
from CodernityDB.database import RecordNotFound
from CodernityDB.index import IndexException, IndexNotFoundException, IndexConflict
from couchpotato import CPLog
from couchpotato.api import addApiView
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import toUnicode, sp
from couchpotato.core.helpers.variable import getImdb, tryInt
log = CPLog(__name__)
class Database(object):
indexes = None
db = None
def __init__(self):
self.indexes = {}
addApiView('database.list_documents', self.listDocuments)
addApiView('database.reindex', self.reindex)
addApiView('database.compact', self.compact)
addApiView('database.document.update', self.updateDocument)
addApiView('database.document.delete', self.deleteDocument)
addEvent('database.setup.after', self.startup_compact)
addEvent('database.setup_index', self.setupIndex)
addEvent('app.migrate', self.migrate)
addEvent('app.after_shutdown', self.close)
def getDB(self):
if not self.db:
from couchpotato import get_db
self.db = get_db()
return self.db
def close(self, **kwargs):
self.getDB().close()
def setupIndex(self, index_name, klass):
self.indexes[index_name] = klass
db = self.getDB()
# Category index
index_instance = klass(db.path, index_name)
try:
# Make sure store and bucket don't exist
exists = []
for x in ['buck', 'stor']:
full_path = os.path.join(db.path, '%s_%s' % (index_name, x))
if os.path.exists(full_path):
exists.append(full_path)
if index_name not in db.indexes_names:
# Remove existing buckets if index isn't there
for x in exists:
os.unlink(x)
# Add index (will restore buckets)
db.add_index(index_instance)
db.reindex_index(index_name)
else:
# Previous info
previous = db.indexes_names[index_name]
previous_version = previous._version
current_version = klass._version
# Only edit index if versions are different
if previous_version < current_version:
log.debug('Index "%s" already exists, updating and reindexing', index_name)
db.destroy_index(previous)
db.add_index(index_instance)
db.reindex_index(index_name)
except:
log.error('Failed adding index %s: %s', (index_name, traceback.format_exc()))
def deleteDocument(self, **kwargs):
db = self.getDB()
try:
document_id = kwargs.get('_request').get_argument('id')
document = db.get('id', document_id)
db.delete(document)
return {
'success': True
}
except:
return {
'success': False,
'error': traceback.format_exc()
}
def updateDocument(self, **kwargs):
db = self.getDB()
try:
document = json.loads(kwargs.get('_request').get_argument('document'))
d = db.update(document)
document.update(d)
return {
'success': True,
'document': document
}
except:
return {
'success': False,
'error': traceback.format_exc()
}
def listDocuments(self, **kwargs):
db = self.getDB()
results = {
'unknown': []
}
for document in db.all('id'):
key = document.get('_t', 'unknown')
if kwargs.get('show') and key != kwargs.get('show'):
continue
if not results.get(key):
results[key] = []
results[key].append(document)
return results
def reindex(self, **kwargs):
success = True
try:
db = self.getDB()
db.reindex()
except:
log.error('Failed index: %s', traceback.format_exc())
success = False
return {
'success': success
}
def compact(self, try_repair = True, **kwargs):
success = False
db = self.getDB()
# Removing left over compact files
db_path = sp(db.path)
for f in os.listdir(sp(db.path)):
for x in ['_compact_buck', '_compact_stor']:
if f[-len(x):] == x:
os.unlink(os.path.join(db_path, f))
try:
start = time.time()
size = float(db.get_db_details().get('size', 0))
log.debug('Compacting database, current size: %sMB', round(size/1048576, 2))
db.compact()
new_size = float(db.get_db_details().get('size', 0))
log.debug('Done compacting database in %ss, new size: %sMB, saved: %sMB', (round(time.time()-start, 2), round(new_size/1048576, 2), round((size-new_size)/1048576, 2)))
success = True
except (IndexException, AttributeError):
if try_repair:
log.error('Something wrong with indexes, trying repair')
# Remove all indexes
old_indexes = self.indexes.keys()
for index_name in old_indexes:
try:
db.destroy_index(index_name)
except IndexNotFoundException:
pass
except:
log.error('Failed removing old index %s', index_name)
# Add them again
for index_name in self.indexes:
klass = self.indexes[index_name]
# Category index
index_instance = klass(db.path, index_name)
try:
db.add_index(index_instance)
db.reindex_index(index_name)
except IndexConflict:
pass
except:
log.error('Failed adding index %s', index_name)
raise
self.compact(try_repair = False)
else:
log.error('Failed compact: %s', traceback.format_exc())
except:
log.error('Failed compact: %s', traceback.format_exc())
return {
'success': success
}
# Compact on start
def startup_compact(self):
from couchpotato import Env
db = self.getDB()
# Try fix for migration failures on desktop
if Env.get('desktop'):
try:
list(db.all('profile', with_doc = True))
except RecordNotFound:
failed_location = '%s_failed' % db.path
old_db = os.path.join(Env.get('data_dir'), 'couchpotato.db.old')
if not os.path.isdir(failed_location) and os.path.isfile(old_db):
log.error('Corrupt database, trying migrate again')
db.close()
# Rename database folder
os.rename(db.path, '%s_failed' % db.path)
# Rename .old database to try another migrate
os.rename(old_db, old_db[:-4])
fireEventAsync('app.restart')
else:
log.error('Migration failed and couldn\'t recover database. Please report on GitHub, with this message.')
db.reindex()
return
# Check size and compact if needed
size = db.get_db_details().get('size')
prop_name = 'last_db_compact'
last_check = int(Env.prop(prop_name, default = 0))
if size > 26214400 and last_check < time.time()-604800: # 25MB / 7 days
self.compact()
Env.prop(prop_name, value = int(time.time()))
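The size/age gate in `startup_compact` above triggers a compact only when both thresholds are crossed. A sketch of that condition as a standalone predicate (the helper and constant names are assumptions; the values 26214400 and 604800 come from the hunk's `25MB / 7 days` comment):

```python
import time

DB_SIZE_LIMIT = 26214400       # 25 MB
COMPACT_INTERVAL = 604800      # 7 days in seconds

def needs_compact(size, last_check, now = None):
    # Both conditions must hold: the database is over the size limit AND
    # the last compact is older than the interval.
    now = now if now is not None else time.time()
    return size > DB_SIZE_LIMIT and last_check < now - COMPACT_INTERVAL

small_db = needs_compact(1048576, 0, now = 1000000)        # size condition fails
fresh_compact = needs_compact(52428800, 999999, now = 1000000)  # age condition fails
stale_big_db = needs_compact(52428800, 0, now = 1000000)   # both hold
```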
def migrate(self):
from couchpotato import Env
old_db = os.path.join(Env.get('data_dir'), 'couchpotato.db')
if not os.path.isfile(old_db): return
log.info('=' * 30)
log.info('Migrating database, hold on..')
time.sleep(1)
if os.path.isfile(old_db):
migrate_start = time.time()
import sqlite3
conn = sqlite3.connect(old_db)
migrate_list = {
'category': ['id', 'label', 'order', 'required', 'preferred', 'ignored', 'destination'],
'profile': ['id', 'label', 'order', 'core', 'hide'],
'profiletype': ['id', 'order', 'finish', 'wait_for', 'quality_id', 'profile_id'],
'quality': ['id', 'identifier', 'order', 'size_min', 'size_max'],
'movie': ['id', 'last_edit', 'library_id', 'status_id', 'profile_id', 'category_id'],
'library': ['id', 'identifier', 'info'],
'librarytitle': ['id', 'title', 'default', 'libraries_id'],
'library_files__file_library': ['library_id', 'file_id'],
'release': ['id', 'identifier', 'movie_id', 'status_id', 'quality_id', 'last_edit'],
'releaseinfo': ['id', 'identifier', 'value', 'release_id'],
'release_files__file_release': ['release_id', 'file_id'],
'status': ['id', 'identifier'],
'properties': ['id', 'identifier', 'value'],
'file': ['id', 'path', 'type_id'],
'filetype': ['identifier', 'id']
}
migrate_data = {}
c = conn.cursor()
for ml in migrate_list:
migrate_data[ml] = {}
rows = migrate_list[ml]
try:
c.execute('SELECT %s FROM `%s`' % ('`' + '`,`'.join(rows) + '`', ml))
except:
# ignore faulty destination_id database
if ml == 'category':
migrate_data[ml] = {}
else:
raise
for p in c.fetchall():
columns = {}
for row in migrate_list[ml]:
columns[row] = p[rows.index(row)]
if not migrate_data[ml].get(p[0]):
migrate_data[ml][p[0]] = columns
else:
if not isinstance(migrate_data[ml][p[0]], list):
migrate_data[ml][p[0]] = [migrate_data[ml][p[0]]]
migrate_data[ml][p[0]].append(columns)
conn.close()
log.info('Getting data took %s', time.time() - migrate_start)
db = self.getDB()
if not db.opened:
return
# Use properties
properties = migrate_data['properties']
log.info('Importing %s properties', len(properties))
for x in properties:
property = properties[x]
Env.prop(property.get('identifier'), property.get('value'))
# Categories
categories = migrate_data.get('category', [])
log.info('Importing %s categories', len(categories))
category_link = {}
for x in categories:
c = categories[x]
new_c = db.insert({
'_t': 'category',
'order': c.get('order', 999),
'label': toUnicode(c.get('label', '')),
'ignored': toUnicode(c.get('ignored', '')),
'preferred': toUnicode(c.get('preferred', '')),
'required': toUnicode(c.get('required', '')),
'destination': toUnicode(c.get('destination', '')),
})
category_link[x] = new_c.get('_id')
# Profiles
log.info('Importing profiles')
new_profiles = db.all('profile', with_doc = True)
new_profiles_by_label = {}
for x in new_profiles:
# Remove default non core profiles
if not x['doc'].get('core'):
db.delete(x['doc'])
else:
new_profiles_by_label[x['doc']['label']] = x['_id']
profiles = migrate_data['profile']
profile_link = {}
for x in profiles:
p = profiles[x]
exists = new_profiles_by_label.get(p.get('label'))
# Update existing with order only
if exists and p.get('core'):
profile = db.get('id', exists)
profile['order'] = tryInt(p.get('order'))
profile['hide'] = p.get('hide') in [1, True, 'true', 'True']
db.update(profile)
profile_link[x] = profile.get('_id')
else:
new_profile = {
'_t': 'profile',
'label': p.get('label'),
'order': int(p.get('order', 999)),
'core': p.get('core', False),
'qualities': [],
'wait_for': [],
'finish': []
}
types = migrate_data['profiletype']
for profile_type in types:
p_type = types[profile_type]
if types[profile_type]['profile_id'] == p['id']:
if p_type['quality_id']:
new_profile['finish'].append(p_type['finish'])
new_profile['wait_for'].append(p_type['wait_for'])
new_profile['qualities'].append(migrate_data['quality'][p_type['quality_id']]['identifier'])
if len(new_profile['qualities']) > 0:
new_profile.update(db.insert(new_profile))
profile_link[x] = new_profile.get('_id')
else:
log.error('Corrupt profile list for "%s", using default.', p.get('label'))
# Qualities
log.info('Importing quality sizes')
new_qualities = db.all('quality', with_doc = True)
new_qualities_by_identifier = {}
for x in new_qualities:
new_qualities_by_identifier[x['doc']['identifier']] = x['_id']
qualities = migrate_data['quality']
quality_link = {}
for x in qualities:
q = qualities[x]
q_id = new_qualities_by_identifier[q.get('identifier')]
quality = db.get('id', q_id)
quality['order'] = q.get('order')
quality['size_min'] = tryInt(q.get('size_min'))
quality['size_max'] = tryInt(q.get('size_max'))
db.update(quality)
quality_link[x] = quality
# Titles
titles = migrate_data['librarytitle']
titles_by_library = {}
for x in titles:
title = titles[x]
if title.get('default'):
titles_by_library[title.get('libraries_id')] = title.get('title')
# Releases
releaseinfos = migrate_data['releaseinfo']
for x in releaseinfos:
info = releaseinfos[x]
# Skip if release doesn't exist for this info
if not migrate_data['release'].get(info.get('release_id')):
continue
if not migrate_data['release'][info.get('release_id')].get('info'):
migrate_data['release'][info.get('release_id')]['info'] = {}
migrate_data['release'][info.get('release_id')]['info'][info.get('identifier')] = info.get('value')
releases = migrate_data['release']
releases_by_media = {}
for x in releases:
release = releases[x]
if not releases_by_media.get(release.get('movie_id')):
releases_by_media[release.get('movie_id')] = []
releases_by_media[release.get('movie_id')].append(release)
# Type ids
types = migrate_data['filetype']
type_by_id = {}
for t in types:
type = types[t]
type_by_id[type.get('id')] = type
# Media
log.info('Importing %s media items', len(migrate_data['movie']))
statuses = migrate_data['status']
libraries = migrate_data['library']
library_files = migrate_data['library_files__file_library']
releases_files = migrate_data['release_files__file_release']
all_files = migrate_data['file']
poster_type = migrate_data['filetype']['poster']
medias = migrate_data['movie']
for x in medias:
m = medias[x]
status = statuses.get(m['status_id']).get('identifier')
l = libraries.get(m['library_id'])
# Only migrate wanted movies, Skip if no identifier present
if not l or not getImdb(l.get('identifier')): continue
profile_id = profile_link.get(m['profile_id'])
category_id = category_link.get(m['category_id'])
title = titles_by_library.get(m['library_id'])
releases = releases_by_media.get(x, [])
info = json.loads(l.get('info', ''))
files = library_files.get(m['library_id'], [])
if not isinstance(files, list):
files = [files]
added_media = fireEvent('movie.add', {
'info': info,
'identifier': l.get('identifier'),
'profile_id': profile_id,
'category_id': category_id,
'title': title
}, force_readd = False, search_after = False, update_after = False, notify_after = False, status = status, single = True)
if not added_media:
log.error('Failed adding media %s: %s', (l.get('identifier'), info))
continue
added_media['files'] = added_media.get('files', {})
for f in files:
ffile = all_files[f.get('file_id')]
# Only migrate posters
if ffile.get('type_id') == poster_type.get('id'):
if ffile.get('path') not in added_media['files'].get('image_poster', []) and os.path.isfile(ffile.get('path')):
added_media['files']['image_poster'] = [ffile.get('path')]
break
if 'image_poster' in added_media['files']:
db.update(added_media)
for rel in releases:
empty_info = False
if not rel.get('info'):
empty_info = True
rel['info'] = {}
quality = quality_link.get(rel.get('quality_id'))
if not quality:
continue
release_status = statuses.get(rel.get('status_id')).get('identifier')
if rel['info'].get('download_id'):
status_support = rel['info'].get('download_status_support', False) in [True, 'true', 'True']
rel['info']['download_info'] = {
'id': rel['info'].get('download_id'),
'downloader': rel['info'].get('download_downloader'),
'status_support': status_support,
}
# Add status to keys
rel['info']['status'] = release_status
if not empty_info:
fireEvent('release.create_from_search', [rel['info']], added_media, quality, single = True)
else:
release = {
'_t': 'release',
'identifier': rel.get('identifier'),
'media_id': added_media.get('_id'),
'quality': quality.get('identifier'),
'status': release_status,
'last_edit': int(time.time()),
'files': {}
}
# Add downloader info if provided
try:
release['download_info'] = rel['info']['download_info']
del rel['download_info']
except:
pass
# Add files
release_files = releases_files.get(rel.get('id'), [])
if not isinstance(release_files, list):
release_files = [release_files]
if len(release_files) == 0:
continue
for f in release_files:
rfile = all_files[f.get('file_id')]
file_type = type_by_id.get(rfile.get('type_id')).get('identifier')
if not release['files'].get(file_type):
release['files'][file_type] = []
release['files'][file_type].append(rfile.get('path'))
try:
rls = db.get('release_identifier', rel.get('identifier'), with_doc = True)['doc']
rls.update(release)
db.update(rls)
except:
db.insert(release)
log.info('Total migration took %s', time.time() - migrate_start)
log.info('=' * 30)
# rename old database
log.info('Renaming old database to %s ', old_db + '.old')
os.rename(old_db, old_db + '.old')
if os.path.isfile(old_db + '-wal'):
os.rename(old_db + '-wal', old_db + '-wal.old')
if os.path.isfile(old_db + '-shm'):
os.rename(old_db + '-shm', old_db + '-shm.old')
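The rename step that closes `migrate()` above retires the old SQLite file together with its `-wal`/`-shm` sidecars so a restart will not migrate twice. A standalone sketch of the same pattern (the `retire_sqlite_db` helper is hypothetical):

```python
import os
import tempfile

def retire_sqlite_db(old_db):
    # Rename the migrated SQLite file and any WAL/SHM sidecars with an
    # .old suffix, mirroring the tail of migrate() above.
    os.rename(old_db, old_db + '.old')
    for suffix in ('-wal', '-shm'):
        sidecar = old_db + suffix
        if os.path.isfile(sidecar):
            os.rename(sidecar, sidecar + '.old')

# Demo against throwaway files: only the db and a -wal sidecar exist
work = tempfile.mkdtemp()
db = os.path.join(work, 'couchpotato.db')
for name in (db, db + '-wal'):
    open(name, 'w').close()
retire_sqlite_db(db)
```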


@@ -1,13 +0,0 @@
config = {
'name': 'download_providers',
'groups': [
{
'label': 'Downloaders',
'description': 'You can select different downloaders for each type (usenet / torrent)',
'type': 'list',
'name': 'download_providers',
'tab': 'downloaders',
'options': [],
},
],
}


@@ -1,118 +0,0 @@
from base64 import b32decode, b16encode
from couchpotato.core.event import addEvent
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.base import Provider
import random
import re
log = CPLog(__name__)
class Downloader(Provider):
type = []
http_time_between_calls = 0
torrent_sources = [
'http://torrage.com/torrent/%s.torrent',
'http://torcache.net/torrent/%s.torrent',
]
torrent_trackers = [
'http://tracker.publicbt.com/announce',
'udp://tracker.istole.it:80/announce',
'udp://fr33domtracker.h33t.com:3310/announce',
'http://tracker.istole.it/announce',
'http://tracker.ccc.de/announce',
'udp://tracker.publicbt.com:80/announce',
'udp://tracker.ccc.de:80/announce',
'http://exodus.desync.com/announce',
'http://exodus.desync.com:6969/announce',
'http://tracker.publichd.eu/announce',
'http://tracker.openbittorrent.com/announce',
]
def __init__(self):
addEvent('download', self._download)
addEvent('download.enabled', self._isEnabled)
addEvent('download.enabled_types', self.getEnabledDownloadType)
addEvent('download.status', self._getAllDownloadStatus)
addEvent('download.remove_failed', self._removeFailed)
def getEnabledDownloadType(self):
for download_type in self.type:
if self.isEnabled(manual = True, data = {'type': download_type}):
return self.type
return []
def _download(self, data = {}, movie = {}, manual = False, filedata = None):
if self.isDisabled(manual, data):
return
return self.download(data = data, movie = movie, filedata = filedata)
def _getAllDownloadStatus(self):
if self.isDisabled(manual = True, data = {}):
return
return self.getAllDownloadStatus()
def getAllDownloadStatus(self):
return
def _removeFailed(self, item):
if self.isDisabled(manual = True, data = {}):
return
if self.conf('delete_failed', default = True):
return self.removeFailed(item)
return False
def removeFailed(self, item):
return
def isCorrectType(self, item_type):
is_correct = item_type in self.type
if not is_correct:
log.debug("Downloader doesn't support this type")
return is_correct
def magnetToTorrent(self, magnet_link):
torrent_hash = re.findall('urn:btih:([\w]{32,40})', magnet_link)[0].upper()
# Convert base 32 to hex
if len(torrent_hash) == 32:
torrent_hash = b16encode(b32decode(torrent_hash))
sources = self.torrent_sources
random.shuffle(sources)
for source in sources:
try:
filedata = self.urlopen(source % torrent_hash, headers = {'Referer': ''}, show_error = False)
if 'torcache' in filedata and 'file not found' in filedata.lower():
continue
return filedata
except:
log.debug('Torrent hash "%s" wasn\'t found on: %s', (torrent_hash, source))
log.error('Failed converting magnet url to torrent: %s', (torrent_hash))
return False
def isDisabled(self, manual, data):
return not self.isEnabled(manual, data)
def _isEnabled(self, manual, data = {}):
if not self.isEnabled(manual, data):
return
return True
def isEnabled(self, manual, data = {}):
d_manual = self.conf('manual', default = False)
return super(Downloader, self).isEnabled() and \
((d_manual and manual) or (d_manual is False)) and \
(not data or self.isCorrectType(data.get('type')))
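The removed `magnetToTorrent` above first normalises the `urn:btih` hash: a 32-character base32 hash is re-encoded to the 40-character hex form the torrent cache URLs expect. A standalone sketch of just that step (`magnet_to_infohash` is a hypothetical name for the extracted logic):

```python
import re
from base64 import b16encode, b32decode

def magnet_to_infohash(magnet_link):
    # Pull the urn:btih hash out of the magnet URI; when it is base32
    # (32 chars) re-encode it as 40-char uppercase hex.
    torrent_hash = re.findall(r'urn:btih:([\w]{32,40})', magnet_link)[0].upper()
    if len(torrent_hash) == 32:
        torrent_hash = b16encode(b32decode(torrent_hash)).decode()
    return torrent_hash
```

For example, a hash of 32 base32 `A` characters decodes to 20 zero bytes and comes back as 40 hex zeros, while a 40-character hex hash passes through unchanged (upper-cased).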


@@ -0,0 +1,158 @@
from __future__ import with_statement
import os
import traceback
from couchpotato.core._base.downloader.main import DownloaderBase
from couchpotato.core.helpers.encoding import sp
from couchpotato.core.helpers.variable import getDownloadDir
from couchpotato.core.logger import CPLog
from couchpotato.environment import Env
log = CPLog(__name__)
autoload = 'Blackhole'
class Blackhole(DownloaderBase):
protocol = ['nzb', 'torrent', 'torrent_magnet']
status_support = False
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
directory = self.conf('directory')
if not directory or not os.path.isdir(directory):
log.error('No directory set for blackhole %s download.', data.get('protocol'))
else:
try:
if not filedata or len(filedata) < 50:
try:
if data.get('protocol') == 'torrent_magnet':
filedata = self.magnetToTorrent(data.get('url'))
data['protocol'] = 'torrent'
except:
log.error('Failed download torrent via magnet url: %s', traceback.format_exc())
if not filedata or len(filedata) < 50:
log.error('No nzb/torrent available: %s', data.get('url'))
return False
file_name = self.createFileName(data, filedata, media)
full_path = os.path.join(directory, file_name)
if self.conf('create_subdir'):
try:
new_path = os.path.splitext(full_path)[0]
if not os.path.exists(new_path):
os.makedirs(new_path)
full_path = os.path.join(new_path, file_name)
except:
log.error('Couldnt create sub dir, reverting to old one: %s', full_path)
try:
if not os.path.isfile(full_path):
log.info('Downloading %s to %s.', (data.get('protocol'), full_path))
with open(full_path, 'wb') as f:
f.write(filedata)
os.chmod(full_path, Env.getPermission('file'))
return self.downloadReturnId('')
else:
log.info('File %s already exists.', full_path)
return self.downloadReturnId('')
except:
log.error('Failed to download to blackhole %s', traceback.format_exc())
pass
except:
log.info('Failed to download file %s: %s', (data.get('name'), traceback.format_exc()))
return False
return False
def test(self):
directory = self.conf('directory')
if directory and os.path.isdir(directory):
test_file = sp(os.path.join(directory, 'couchpotato_test.txt'))
# Check if folder is writable
self.createFile(test_file, 'This is a test file')
if os.path.isfile(test_file):
os.remove(test_file)
return True
return False
def getEnabledProtocol(self):
if self.conf('use_for') == 'both':
return super(Blackhole, self).getEnabledProtocol()
elif self.conf('use_for') == 'torrent':
return ['torrent', 'torrent_magnet']
else:
return ['nzb']
def isEnabled(self, manual = False, data = None):
if not data: data = {}
for_protocol = ['both']
if data and 'torrent' in data.get('protocol'):
for_protocol.append('torrent')
elif data:
for_protocol.append(data.get('protocol'))
return super(Blackhole, self).isEnabled(manual, data) and \
((self.conf('use_for') in for_protocol))
config = [{
'name': 'blackhole',
'order': 30,
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'blackhole',
'label': 'Black hole',
'description': 'Download the NZB/Torrent to a specific folder. <em>Note: Seeding and copying/linking features do <strong>not</strong> work with Black hole</em>.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': True,
'type': 'enabler',
'radio_group': 'nzb,torrent',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Directory where the .nzb (or .torrent) file is saved to.',
'default': getDownloadDir()
},
{
'name': 'use_for',
'label': 'Use for',
'default': 'both',
'type': 'dropdown',
'values': [('usenet & torrents', 'both'), ('usenet', 'nzb'), ('torrent', 'torrent')],
},
{
'name': 'create_subdir',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Create a sub directory when saving the .nzb (or .torrent).',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,48 +0,0 @@
from .main import Blackhole
from couchpotato.core.helpers.variable import getDownloadDir
def start():
return Blackhole()
config = [{
'name': 'blackhole',
'order': 30,
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'blackhole',
'label': 'Black hole',
'description': 'Download the NZB/Torrent to a specific folder.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': True,
'type': 'enabler',
'radio_group': 'nzb,torrent',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Directory where the .nzb (or .torrent) file is saved to.',
'default': getDownloadDir()
},
{
'name': 'use_for',
'label': 'Use for',
'default': 'both',
'type': 'dropdown',
'values': [('usenet & torrents', 'both'), ('usenet', 'nzb'), ('torrent', 'torrent')],
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,70 +0,0 @@
from __future__ import with_statement
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.logger import CPLog
import os
import traceback
log = CPLog(__name__)
class Blackhole(Downloader):
type = ['nzb', 'torrent', 'torrent_magnet']
def download(self, data = {}, movie = {}, filedata = None):
directory = self.conf('directory')
if not directory or not os.path.isdir(directory):
log.error('No directory set for blackhole %s download.', data.get('type'))
else:
try:
if not filedata or len(filedata) < 50:
try:
if data.get('type') == 'torrent_magnet':
filedata = self.magnetToTorrent(data.get('url'))
data['type'] = 'torrent'
except:
log.error('Failed download torrent via magnet url: %s', traceback.format_exc())
if not filedata or len(filedata) < 50:
log.error('No nzb/torrent available: %s', data.get('url'))
return False
fullPath = os.path.join(directory, self.createFileName(data, filedata, movie))
try:
if not os.path.isfile(fullPath):
log.info('Downloading %s to %s.', (data.get('type'), fullPath))
with open(fullPath, 'wb') as f:
f.write(filedata)
return True
else:
log.info('File %s already exists.', fullPath)
return True
except:
log.error('Failed to download to blackhole %s', traceback.format_exc())
pass
except:
log.info('Failed to download file %s: %s', (data.get('name'), traceback.format_exc()))
return False
return False
def getEnabledDownloadType(self):
if self.conf('use_for') == 'both':
return super(Blackhole, self).getEnabledDownloadType()
elif self.conf('use_for') == 'torrent':
return ['torrent', 'torrent_magnet']
else:
return ['nzb']
def isEnabled(self, manual, data = {}):
for_type = ['both']
if data and 'torrent' in data.get('type'):
for_type.append('torrent')
elif data:
for_type.append(data.get('type'))
return super(Blackhole, self).isEnabled(manual, data) and \
((self.conf('use_for') in for_type))


@@ -0,0 +1,384 @@
from base64 import b64encode, b16encode, b32decode
from datetime import timedelta
from hashlib import sha1
import os.path
import re
import traceback
from bencode import bencode as benc, bdecode
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import isInt, sp
from couchpotato.core.helpers.variable import tryFloat, cleanHost
from couchpotato.core.logger import CPLog
from synchronousdeluge import DelugeClient
log = CPLog(__name__)
autoload = 'Deluge'
class Deluge(DownloaderBase):
protocol = ['torrent', 'torrent_magnet']
log = CPLog(__name__)
drpc = None
def connect(self, reconnect = False):
# Load host from config and split out port.
host = cleanHost(self.conf('host'), protocol = False).split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
if not self.drpc or reconnect:
self.drpc = DelugeRPC(host[0], port = host[1], username = self.conf('username'), password = self.conf('password'))
return self.drpc
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
log.info('Sending "%s" (%s) to Deluge.', (data.get('name'), data.get('protocol')))
if not self.connect():
return False
if not filedata and data.get('protocol') == 'torrent':
log.error('Failed sending torrent, no data')
return False
# Set parameters for Deluge
options = {
'add_paused': self.conf('paused', default = 0),
'label': self.conf('label')
}
if self.conf('directory'):
if os.path.isdir(self.conf('directory')):
options['download_location'] = self.conf('directory')
else:
log.error('Download directory from Deluge settings: %s doesn\'t exist', self.conf('directory'))
if self.conf('completed_directory'):
if os.path.isdir(self.conf('completed_directory')):
options['move_completed'] = 1
options['move_completed_path'] = self.conf('completed_directory')
else:
log.error('Download directory from Deluge settings: %s doesn\'t exist', self.conf('directory'))
if data.get('seed_ratio'):
options['stop_at_ratio'] = 1
options['stop_ratio'] = tryFloat(data.get('seed_ratio'))
# Deluge only has seed time as a global option. Might be added in
# in a future API release.
# if data.get('seed_time'):
# Send request to Deluge
if data.get('protocol') == 'torrent_magnet':
remote_torrent = self.drpc.add_torrent_magnet(data.get('url'), options)
else:
filename = self.createFileName(data, filedata, media)
remote_torrent = self.drpc.add_torrent_file(filename, filedata, options)
if not remote_torrent:
log.error('Failed sending torrent to Deluge')
return False
log.info('Torrent sent to Deluge successfully.')
return self.downloadReturnId(remote_torrent)
def test(self):
if self.connect(True) and self.drpc.test():
return True
return False
def getAllDownloadStatus(self, ids):
log.debug('Checking Deluge download status.')
if not self.connect():
return []
release_downloads = ReleaseDownloadList(self)
queue = self.drpc.get_alltorrents(ids)
if not queue:
log.debug('Nothing in queue or error')
return []
for torrent_id in queue:
torrent = queue[torrent_id]
if not 'hash' in torrent:
# When given a list of ids, deluge will return an empty item for a non-existent torrent.
continue
log.debug('name=%s / id=%s / save_path=%s / move_on_completed=%s / move_completed_path=%s / hash=%s / progress=%s / state=%s / eta=%s / ratio=%s / stop_ratio=%s / is_seed=%s / is_finished=%s / paused=%s', (torrent['name'], torrent['hash'], torrent['save_path'], torrent['move_on_completed'], torrent['move_completed_path'], torrent['hash'], torrent['progress'], torrent['state'], torrent['eta'], torrent['ratio'], torrent['stop_ratio'], torrent['is_seed'], torrent['is_finished'], torrent['paused']))
# Deluge has no easy way to work out if a torrent is stalled or failing.
#status = 'failed'
status = 'busy'
if torrent['is_seed'] and tryFloat(torrent['ratio']) < tryFloat(torrent['stop_ratio']):
# Deluge exposes torrent['seeding_time'], but the downloader seed_time
# setting cannot be passed to Deluge when the torrent is added, so
# Deluge will only look at the ratio.
# See above comment in download().
status = 'seeding'
elif torrent['is_seed'] and torrent['is_finished'] and torrent['paused'] and torrent['state'] == 'Paused':
status = 'completed'
download_dir = sp(torrent['save_path'])
if torrent['move_on_completed']:
download_dir = torrent['move_completed_path']
torrent_files = []
for file_item in torrent['files']:
torrent_files.append(sp(os.path.join(download_dir, file_item['path'])))
release_downloads.append({
'id': torrent['hash'],
'name': torrent['name'],
'status': status,
'original_status': torrent['state'],
'seed_ratio': torrent['ratio'],
'timeleft': str(timedelta(seconds = torrent['eta'])),
'folder': sp(download_dir if len(torrent_files) == 1 else os.path.join(download_dir, torrent['name'])),
'files': torrent_files,
})
return release_downloads
def pause(self, release_download, pause = True):
if pause:
return self.drpc.pause_torrent([release_download['id']])
else:
return self.drpc.resume_torrent([release_download['id']])
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
return self.drpc.remove_torrent(release_download['id'], True)
def processComplete(self, release_download, delete_files = False):
log.debug('Requesting Deluge to remove the torrent %s%s.', (release_download['name'], ' and cleanup the downloaded files' if delete_files else ''))
return self.drpc.remove_torrent(release_download['id'], remove_local_data = delete_files)
class DelugeRPC(object):
host = 'localhost'
port = 58846
username = None
password = None
client = None
def __init__(self, host = 'localhost', port = 58846, username = None, password = None):
super(DelugeRPC, self).__init__()
self.host = host
self.port = port
self.username = username
self.password = password
def connect(self):
self.client = DelugeClient()
self.client.connect(self.host, int(self.port), self.username, self.password)
def test(self):
try:
self.connect()
except:
return False
return True
def add_torrent_magnet(self, torrent, options):
torrent_id = False
try:
self.connect()
torrent_id = self.client.core.add_torrent_magnet(torrent, options).get()
if not torrent_id:
torrent_id = self._check_torrent(True, torrent)
if torrent_id and options['label']:
self.client.label.set_torrent(torrent_id, options['label']).get()
except Exception as err:
log.error('Failed to add torrent magnet %s: %s %s', (torrent, err, traceback.format_exc()))
finally:
if self.client:
self.disconnect()
return torrent_id
def add_torrent_file(self, filename, torrent, options):
torrent_id = False
try:
self.connect()
torrent_id = self.client.core.add_torrent_file(filename, b64encode(torrent), options).get()
if not torrent_id:
torrent_id = self._check_torrent(False, torrent)
if torrent_id and options['label']:
self.client.label.set_torrent(torrent_id, options['label']).get()
except Exception as err:
log.error('Failed to add torrent file %s: %s %s', (filename, err, traceback.format_exc()))
finally:
if self.client:
self.disconnect()
return torrent_id
def get_alltorrents(self, ids):
ret = False
try:
self.connect()
ret = self.client.core.get_torrents_status({'id': ids}, ('name', 'hash', 'save_path', 'move_completed_path', 'progress', 'state', 'eta', 'ratio', 'stop_ratio', 'is_seed', 'is_finished', 'paused', 'move_on_completed', 'files')).get()
except Exception as err:
log.error('Failed to get all torrents: %s %s', (err, traceback.format_exc()))
finally:
if self.client:
self.disconnect()
return ret
def pause_torrent(self, torrent_ids):
try:
self.connect()
self.client.core.pause_torrent(torrent_ids).get()
except Exception as err:
log.error('Failed to pause torrent: %s %s', (err, traceback.format_exc()))
finally:
if self.client:
self.disconnect()
def resume_torrent(self, torrent_ids):
try:
self.connect()
self.client.core.resume_torrent(torrent_ids).get()
except Exception as err:
log.error('Failed to resume torrent: %s %s', (err, traceback.format_exc()))
finally:
if self.client:
self.disconnect()
def remove_torrent(self, torrent_id, remove_local_data):
ret = False
try:
self.connect()
ret = self.client.core.remove_torrent(torrent_id, remove_local_data).get()
except Exception as err:
log.error('Failed to remove torrent: %s %s', (err, traceback.format_exc()))
finally:
if self.client:
self.disconnect()
return ret
def disconnect(self):
self.client.disconnect()
def _check_torrent(self, magnet, torrent):
# Torrent not added, check if it already existed.
if magnet:
torrent_hash = re.findall(r'urn:btih:(\w{32,40})', torrent)[0]
else:
info = bdecode(torrent)["info"]
torrent_hash = sha1(benc(info)).hexdigest()
# Convert base 32 to hex
if len(torrent_hash) == 32:
torrent_hash = b16encode(b32decode(torrent_hash))
torrent_hash = torrent_hash.lower()
torrent_check = self.client.core.get_torrent_status(torrent_hash, {}).get()
if torrent_check['hash']:
return torrent_hash
return False
config = [{
'name': 'deluge',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'deluge',
'label': 'Deluge',
'description': 'Use <a href="http://www.deluge-torrent.org/" target="_blank">Deluge</a> to download torrents.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:58846',
'description': 'Hostname with port. Usually <strong>localhost:58846</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Download to this directory. Keep empty for default Deluge download directory.',
},
{
'name': 'completed_directory',
'type': 'directory',
'description': 'Move completed torrent to this directory. Keep empty for default Deluge options.',
'advanced': True,
},
{
'name': 'label',
'description': 'Label to add to torrents in the Deluge UI.',
},
{
'name': 'remove_complete',
'label': 'Remove torrent',
'type': 'bool',
'default': True,
'advanced': True,
'description': 'Remove the torrent from Deluge after it has finished seeding.',
},
{
'name': 'delete_files',
'label': 'Remove files',
'default': True,
'type': 'bool',
'advanced': True,
'description': 'Also remove the leftover files.',
},
{
'name': 'paused',
'type': 'bool',
'advanced': True,
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]
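The `_check_torrent()` helper above recovers a torrent's info-hash when the add call returns nothing but the torrent may already exist in Deluge, normalizing 32-character base32 hashes to 40-character hex. A standalone sketch of the magnet branch (Python 3 here; the function name is illustrative, not part of the plugin):

```python
import re
from base64 import b16encode, b32decode

def magnet_info_hash(magnet_uri):
    # Pull the urn:btih info-hash out of a magnet URI. It may be
    # 40-char hex or 32-char base32; normalize to lowercase hex,
    # mirroring the base32 -> hex step in _check_torrent().
    match = re.search(r'urn:btih:(\w{32,40})', magnet_uri)
    if not match:
        return None
    info_hash = match.group(1)
    if len(info_hash) == 32:
        # b32decode expects uppercase input; b16encode yields hex bytes
        info_hash = b16encode(b32decode(info_hash.upper())).decode()
    return info_hash.lower()
```

The normalized hash can then be compared against `get_torrent_status()` results, as the plugin does.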


@@ -0,0 +1,293 @@
from base64 import standard_b64encode
from datetime import timedelta
import re
import shutil
import socket
import traceback
import xmlrpclib
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import ss, sp
from couchpotato.core.helpers.variable import tryInt, md5, cleanHost
from couchpotato.core.logger import CPLog
log = CPLog(__name__)
autoload = 'NZBGet'
class NZBGet(DownloaderBase):
protocol = ['nzb']
rpc = 'xmlrpc'
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
if not filedata:
log.error('Unable to get NZB file: %s', traceback.format_exc())
return False
log.info('Sending "%s" to NZBGet.', data.get('name'))
nzb_name = ss('%s.nzb' % self.createNzbName(data, media))
rpc = self.getRPC()
try:
if rpc.writelog('INFO', 'CouchPotato connected to drop off %s.' % nzb_name):
log.debug('Successfully connected to NZBGet')
else:
log.info('Successfully connected to NZBGet, but unable to send a message')
except socket.error:
log.error('NZBGet is not responding. Please ensure that NZBGet is running and host setting is correct.')
return False
except xmlrpclib.ProtocolError as e:
if e.errcode == 401:
log.error('Password is incorrect.')
else:
log.error('Protocol Error: %s', e)
return False
if re.search(r"^0", rpc.version()):
xml_response = rpc.append(nzb_name, self.conf('category'), False, standard_b64encode(filedata.strip()))
else:
xml_response = rpc.append(nzb_name, self.conf('category'), tryInt(self.conf('priority')), False, standard_b64encode(filedata.strip()))
if xml_response:
log.info('NZB sent successfully to NZBGet')
nzb_id = md5(data['url']) # about as unique as they come ;)
couchpotato_id = "couchpotato=" + nzb_id
groups = rpc.listgroups()
file_id = [item['LastID'] for item in groups if item['NZBFilename'] == nzb_name]
confirmed = rpc.editqueue("GroupSetParameter", 0, couchpotato_id, file_id)
if confirmed:
log.debug('couchpotato parameter set in nzbget download')
return self.downloadReturnId(nzb_id)
else:
log.error('NZBGet could not add %s to the queue.', nzb_name)
return False
def test(self):
rpc = self.getRPC()
try:
if rpc.writelog('INFO', 'CouchPotato connected to test connection'):
log.debug('Successfully connected to NZBGet')
else:
log.info('Successfully connected to NZBGet, but unable to send a message')
except socket.error:
log.error('NZBGet is not responding. Please ensure that NZBGet is running and host setting is correct.')
return False
except xmlrpclib.ProtocolError as e:
if e.errcode == 401:
log.error('Password is incorrect.')
else:
log.error('Protocol Error: %s', e)
return False
return True
def getAllDownloadStatus(self, ids):
log.debug('Checking NZBGet download status.')
rpc = self.getRPC()
try:
if rpc.writelog('INFO', 'CouchPotato connected to check status'):
log.debug('Successfully connected to NZBGet')
else:
log.info('Successfully connected to NZBGet, but unable to send a message')
except socket.error:
log.error('NZBGet is not responding. Please ensure that NZBGet is running and host setting is correct.')
return []
except xmlrpclib.ProtocolError as e:
if e.errcode == 401:
log.error('Password is incorrect.')
else:
log.error('Protocol Error: %s', e)
return []
# Get NZBGet data
try:
status = rpc.status()
groups = rpc.listgroups()
queue = rpc.postqueue(0)
history = rpc.history()
except:
log.error('Failed getting data: %s', traceback.format_exc(1))
return []
release_downloads = ReleaseDownloadList(self)
for nzb in groups:
try:
nzb_id = [param['Value'] for param in nzb['Parameters'] if param['Name'] == 'couchpotato'][0]
except:
nzb_id = nzb['NZBID']
if nzb_id in ids:
log.debug('Found %s in NZBGet download queue', nzb['NZBFilename'])
timeleft = -1
try:
if nzb['ActiveDownloads'] > 0 and nzb['DownloadRate'] > 0 and not (status['DownloadPaused'] or status['Download2Paused']):
timeleft = str(timedelta(seconds = nzb['RemainingSizeMB'] * 2 ** 20 / status['DownloadRate']))
except:
pass
release_downloads.append({
'id': nzb_id,
'name': nzb['NZBFilename'],
'original_status': 'DOWNLOADING' if nzb['ActiveDownloads'] > 0 else 'QUEUED',
# NZBGet has no native API call for time left; this estimates it from the remaining size and the current download rate.
'timeleft': timeleft,
})
for nzb in queue: # 'Parameters' is not passed in rpc.postqueue
if nzb['NZBID'] in ids:
log.debug('Found %s in NZBGet postprocessing queue', nzb['NZBFilename'])
release_downloads.append({
'id': nzb['NZBID'],
'name': nzb['NZBFilename'],
'original_status': nzb['Stage'],
'timeleft': str(timedelta(seconds = 0)) if not status['PostPaused'] else -1,
})
for nzb in history:
try:
nzb_id = [param['Value'] for param in nzb['Parameters'] if param['Name'] == 'couchpotato'][0]
except:
nzb_id = nzb['NZBID']
if nzb_id in ids:
log.debug('Found %s in NZBGet history. ParStatus: %s, ScriptStatus: %s, Log: %s', (nzb['NZBFilename'] , nzb['ParStatus'], nzb['ScriptStatus'] , nzb['Log']))
release_downloads.append({
'id': nzb_id,
'name': nzb['NZBFilename'],
'status': 'completed' if nzb['ParStatus'] in ['SUCCESS', 'NONE'] and nzb['ScriptStatus'] in ['SUCCESS', 'NONE'] else 'failed',
'original_status': nzb['ParStatus'] + ', ' + nzb['ScriptStatus'],
'timeleft': str(timedelta(seconds = 0)),
'folder': sp(nzb['DestDir'])
})
return release_downloads
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
rpc = self.getRPC()
try:
if rpc.writelog('INFO', 'CouchPotato connected to delete some history'):
log.debug('Successfully connected to NZBGet')
else:
log.info('Successfully connected to NZBGet, but unable to send a message')
except socket.error:
log.error('NZBGet is not responding. Please ensure that NZBGet is running and host setting is correct.')
return False
except xmlrpclib.ProtocolError as e:
if e.errcode == 401:
log.error('Password is incorrect.')
else:
log.error('Protocol Error: %s', e)
return False
try:
history = rpc.history()
nzb_id = None
path = None
for hist in history:
for param in hist['Parameters']:
if param['Name'] == 'couchpotato' and param['Value'] == release_download['id']:
nzb_id = hist['ID']
path = hist['DestDir']
if nzb_id and path and rpc.editqueue('HistoryDelete', 0, "", [tryInt(nzb_id)]):
shutil.rmtree(path, True)
except:
log.error('Failed deleting: %s', traceback.format_exc(0))
return False
return True
def getRPC(self):
url = cleanHost(host = self.conf('host'), ssl = self.conf('ssl'), username = self.conf('username'), password = self.conf('password')) + self.rpc
return xmlrpclib.ServerProxy(url)
config = [{
'name': 'nzbget',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'nzbget',
'label': 'NZBGet',
'description': 'Use <a href="http://nzbget.sourceforge.net/Main_Page" target="_blank">NZBGet</a> to download NZBs.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'localhost:6789',
'description': 'Hostname with port. Usually <strong>localhost:6789</strong>',
},
{
'name': 'ssl',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Use HyperText Transfer Protocol Secure, or <strong>https</strong>',
},
{
'name': 'username',
'default': 'nzbget',
'advanced': True,
'description': 'Set a different username to connect. Default: nzbget',
},
{
'name': 'password',
'type': 'password',
'description': 'Default NZBGet password is <i>tegbzn6789</i>',
},
{
'name': 'category',
'default': 'Movies',
'description': 'The category CP places the nzb in. Like <strong>movies</strong> or <strong>couchpotato</strong>',
},
{
'name': 'priority',
'advanced': True,
'default': '0',
'type': 'dropdown',
'values': [('Very Low', -100), ('Low', -50), ('Normal', 0), ('High', 50), ('Very High', 100)],
'description': 'Only change this if you are using NZBget 9.0 or higher',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]
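NZBGet reports remaining size in megabytes and download rate in bytes per second, so the `timeleft` estimate above has to scale megabytes to bytes before dividing. A minimal sketch of that arithmetic (the function name is made up):

```python
from datetime import timedelta

def nzbget_timeleft(remaining_mb, rate_bytes_per_sec):
    # RemainingSizeMB is in MiB, DownloadRate in bytes/s, so scale by
    # 2 ** 20 before dividing (note: `2 ^ 20` would be XOR in Python).
    if rate_bytes_per_sec <= 0:
        return -1  # unknown, matching the plugin's fallback value
    return str(timedelta(seconds=remaining_mb * 2 ** 20 // rate_bytes_per_sec))

print(nzbget_timeleft(1024, 2 ** 20))  # 1 GiB at 1 MiB/s -> 0:17:04
```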


@@ -1,54 +0,0 @@
from .main import NZBGet
def start():
return NZBGet()
config = [{
'name': 'nzbget',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'nzbget',
'label': 'NZBGet',
'description': 'Use <a href="http://nzbget.sourceforge.net/Main_Page" target="_blank">NZBGet</a> to download NZBs.',
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'localhost:6789',
'description': 'Hostname with port. Usually <strong>localhost:6789</strong>',
},
{
'name': 'password',
'type': 'password',
'description': 'Default NZBGet password is <i>tegbzn6789</i>',
},
{
'name': 'category',
'default': 'Movies',
'description': 'The category CP places the nzb in. Like <strong>movies</strong> or <strong>couchpotato</strong>',
},
{
'name': 'priority',
'default': '0',
'type': 'dropdown',
'values': [('Very Low', -100), ('Low', -50), ('Normal', 0), ('High', 50), ('Very High', 100)],
'description': 'Only change this if you are using NZBget 9.0 or higher',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,56 +0,0 @@
from base64 import standard_b64encode
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import ss
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
import re
import socket
import traceback
import xmlrpclib
log = CPLog(__name__)
class NZBGet(Downloader):
type = ['nzb']
url = 'http://nzbget:%(password)s@%(host)s/xmlrpc'
def download(self, data = {}, movie = {}, filedata = None):
if not filedata:
log.error('Unable to get NZB file: %s', traceback.format_exc())
return False
log.info('Sending "%s" to NZBGet.', data.get('name'))
url = self.url % {'host': self.conf('host'), 'password': self.conf('password')}
nzb_name = ss('%s.nzb' % self.createNzbName(data, movie))
rpc = xmlrpclib.ServerProxy(url)
try:
if rpc.writelog('INFO', 'CouchPotato connected to drop off %s.' % nzb_name):
log.info('Successfully connected to NZBGet')
else:
log.info('Successfully connected to NZBGet, but unable to send a message')
except socket.error:
log.error('NZBGet is not responding. Please ensure that NZBGet is running and host setting is correct.')
return False
except xmlrpclib.ProtocolError, e:
if e.errcode == 401:
log.error('Password is incorrect.')
else:
log.error('Protocol Error: %s', e)
return False
if re.search(r"^0", rpc.version()):
xml_response = rpc.append(nzb_name, self.conf('category'), False, standard_b64encode(filedata.strip()))
else:
xml_response = rpc.append(nzb_name, self.conf('category'), tryInt(self.conf('priority')), False, standard_b64encode(filedata.strip()))
if xml_response:
log.info('NZB sent successfully to NZBGet')
return True
else:
log.error('NZBGet could not add %s to the queue.', nzb_name)
return False


@@ -0,0 +1,245 @@
from base64 import b64encode
from urllib2 import URLError
from uuid import uuid4
import hashlib
import httplib
import json
import os
import socket
import ssl
import sys
import time
import traceback
import urllib2
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import tryUrlencode, sp
from couchpotato.core.helpers.variable import cleanHost
from couchpotato.core.logger import CPLog
log = CPLog(__name__)
autoload = 'NZBVortex'
class NZBVortex(DownloaderBase):
protocol = ['nzb']
api_level = None
session_id = None
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
# Send the nzb
try:
nzb_filename = self.createFileName(data, filedata, media)
self.call('nzb/add', files = {'file': (nzb_filename, filedata)})
time.sleep(10)
raw_statuses = self.call('nzb')
nzb_id = [nzb['id'] for nzb in raw_statuses.get('nzbs', []) if os.path.basename(nzb['nzbFileName']) == nzb_filename][0]
return self.downloadReturnId(nzb_id)
except:
log.error('Something went wrong sending the NZB file: %s', traceback.format_exc())
return False
def test(self):
try:
login_result = self.login()
except:
return False
return login_result
def getAllDownloadStatus(self, ids):
raw_statuses = self.call('nzb')
release_downloads = ReleaseDownloadList(self)
for nzb in raw_statuses.get('nzbs', []):
if nzb['id'] in ids:
# Check status
status = 'busy'
if nzb['state'] == 20:
status = 'completed'
elif nzb['state'] in [21, 22, 24]:
status = 'failed'
release_downloads.append({
'id': nzb['id'],
'name': nzb['uiTitle'],
'status': status,
'original_status': nzb['state'],
'timeleft': -1,
'folder': sp(nzb['destinationPath']),
})
return release_downloads
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
try:
self.call('nzb/%s/cancel' % release_download['id'])
except:
log.error('Failed deleting: %s', traceback.format_exc(0))
return False
return True
def login(self):
nonce = self.call('auth/nonce', auth = False).get('authNonce')
cnonce = uuid4().hex
hashed = b64encode(hashlib.sha256('%s:%s:%s' % (nonce, cnonce, self.conf('api_key'))).digest())
params = {
'nonce': nonce,
'cnonce': cnonce,
'hash': hashed
}
login_data = self.call('auth/login', parameters = params, auth = False)
# Save for later
if login_data.get('loginResult') == 'successful':
self.session_id = login_data.get('sessionID')
return True
log.error('Login failed, please check your API key')
return False
def call(self, call, parameters = None, repeat = False, auth = True, *args, **kwargs):
# Login first
if not parameters: parameters = {}
if not self.session_id and auth:
self.login()
# Always add session id to request
if self.session_id:
parameters['sessionid'] = self.session_id
params = tryUrlencode(parameters)
url = cleanHost(self.conf('host'), ssl = self.conf('ssl')) + 'api/' + call
try:
data = self.urlopen('%s?%s' % (url, params), *args, **kwargs)
if data:
return json.loads(data)
except URLError as e:
if hasattr(e, 'code') and e.code == 403:
# Try login and do again
if not repeat:
self.login()
return self.call(call, parameters = parameters, repeat = True, **kwargs)
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
return {}
def getApiLevel(self):
if not self.api_level:
url = cleanHost(self.conf('host')) + 'api/app/apilevel'
try:
data = self.urlopen(url, show_error = False)
self.api_level = float(json.loads(data).get('apilevel'))
except URLError as e:
if hasattr(e, 'code') and e.code == 403:
log.error('This version of NZBVortex isn\'t supported. Please update to 2.8.6 or higher')
else:
log.error('NZBVortex doesn\'t seem to be running or maybe the remote option isn\'t enabled yet: %s', traceback.format_exc(1))
return self.api_level
def isEnabled(self, manual = False, data = None):
if not data: data = {}
return super(NZBVortex, self).isEnabled(manual, data) and self.getApiLevel()
class HTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
def connect(self):
sock = socket.create_connection((self.host, self.port), self.timeout)
if sys.version_info < (2, 6, 7):
if hasattr(self, '_tunnel_host'):
self.sock = sock
self._tunnel()
else:
if self._tunnel_host:
self.sock = sock
self._tunnel()
self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ssl_version = ssl.PROTOCOL_TLSv1)
class HTTPSHandler(urllib2.HTTPSHandler):
def https_open(self, req):
return self.do_open(HTTPSConnection, req)
config = [{
'name': 'nzbvortex',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'nzbvortex',
'label': 'NZBVortex',
'description': 'Use <a href="http://www.nzbvortex.com/landing/" target="_blank">NZBVortex</a> to download NZBs.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'localhost:4321',
'description': 'Hostname with port. Usually <strong>localhost:4321</strong>',
},
{
'name': 'ssl',
'default': 1,
'type': 'bool',
'advanced': True,
'description': 'Use HyperText Transfer Protocol Secure, or <strong>https</strong>',
},
{
'name': 'api_key',
'label': 'Api Key',
},
{
'name': 'manual',
'default': False,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]
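The `login()` method above implements NZBVortex's challenge handshake: the client fetches a server nonce, generates a random cnonce, and sends base64 of SHA-256 over `nonce:cnonce:api_key`. A standalone sketch (Python 3; the helper name is made up, and the optional `cnonce` parameter exists only to make the sketch testable):

```python
import hashlib
from base64 import b64encode
from uuid import uuid4

def vortex_login_hash(nonce, api_key, cnonce=None):
    # Mirrors login() above: hash = base64(SHA-256("nonce:cnonce:api_key")).
    # nonce comes from the auth/nonce call; cnonce is a random hex token.
    cnonce = cnonce or uuid4().hex
    digest = hashlib.sha256(('%s:%s:%s' % (nonce, cnonce, api_key)).encode()).digest()
    return cnonce, b64encode(digest).decode()
```

The resulting `nonce`, `cnonce`, and `hash` triple is what `auth/login` expects as parameters.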


@@ -1,47 +0,0 @@
from .main import NZBVortex
def start():
return NZBVortex()
config = [{
'name': 'nzbvortex',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'nzbvortex',
'label': 'NZBVortex',
'description': 'Use <a href="http://www.nzbvortex.com/landing/" target="_blank">NZBVortex</a> to download NZBs.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'https://localhost:4321',
},
{
'name': 'api_key',
'label': 'Api Key',
},
{
'name': 'manual',
'default': False,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]


@@ -1,170 +0,0 @@
from base64 import b64encode
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import tryUrlencode, ss
from couchpotato.core.helpers.variable import cleanHost
from couchpotato.core.logger import CPLog
from urllib2 import URLError
from uuid import uuid4
import hashlib
import httplib
import json
import socket
import ssl
import sys
import traceback
import urllib2
log = CPLog(__name__)
class NZBVortex(Downloader):
type = ['nzb']
api_level = None
session_id = None
def download(self, data = {}, movie = {}, filedata = None):
# Send the nzb
try:
nzb_filename = self.createFileName(data, filedata, movie)
self.call('nzb/add', params = {'file': (ss(nzb_filename), filedata)}, multipart = True)
return True
except:
log.error('Something went wrong sending the NZB file: %s', traceback.format_exc())
return False
def getAllDownloadStatus(self):
raw_statuses = self.call('nzb')
statuses = []
for item in raw_statuses.get('nzbs', []):
# Check status
status = 'busy'
if item['state'] == 20:
status = 'completed'
elif item['state'] in [21, 22, 24]:
status = 'failed'
statuses.append({
'id': item['id'],
'name': item['uiTitle'],
'status': status,
'original_status': item['state'],
'timeleft':-1,
})
return statuses
def removeFailed(self, item):
log.info('%s failed downloading, deleting...', item['name'])
try:
self.call('nzb/%s/cancel' % item['id'])
except:
log.error('Failed deleting: %s', traceback.format_exc(0))
return False
return True
def login(self):
nonce = self.call('auth/nonce', auth = False).get('authNonce')
cnonce = uuid4().hex
hashed = b64encode(hashlib.sha256('%s:%s:%s' % (nonce, cnonce, self.conf('api_key'))).digest())
params = {
'nonce': nonce,
'cnonce': cnonce,
'hash': hashed
}
login_data = self.call('auth/login', parameters = params, auth = False)
# Save for later
if login_data.get('loginResult') == 'successful':
self.session_id = login_data.get('sessionID')
return True
log.error('Login failed, please check you api-key')
return False
def call(self, call, parameters = {}, repeat = False, auth = True, *args, **kwargs):
# Login first
if not self.session_id and auth:
self.login()
# Always add session id to request
if self.session_id:
parameters['sessionid'] = self.session_id
params = tryUrlencode(parameters)
url = cleanHost(self.conf('host')) + 'api/' + call
url_opener = urllib2.build_opener(HTTPSHandler())
try:
data = self.urlopen('%s?%s' % (url, params), opener = url_opener, *args, **kwargs)
if data:
return json.loads(data)
except URLError, e:
if hasattr(e, 'code') and e.code == 403:
# Try login and do again
if not repeat:
self.login()
return self.call(call, parameters = parameters, repeat = True, *args, **kwargs)
log.error('Failed to parsing %s: %s', (self.getName(), traceback.format_exc()))
except:
log.error('Failed to parsing %s: %s', (self.getName(), traceback.format_exc()))
return {}
def getApiLevel(self):
if not self.api_level:
url = cleanHost(self.conf('host')) + 'api/app/apilevel'
url_opener = urllib2.build_opener(HTTPSHandler())
try:
data = self.urlopen(url, opener = url_opener, show_error = False)
self.api_level = float(json.loads(data).get('apilevel'))
except URLError, e:
if hasattr(e, 'code') and e.code == 403:
log.error('This version of NZBVortex isn\'t supported. Please update to 2.8.6 or higher')
else:
log.error('NZBVortex doesn\'t seem to be running or maybe the remote option isn\'t enabled yet: %s', traceback.format_exc(1))
return self.api_level
def isEnabled(self, manual, data):
return super(NZBVortex, self).isEnabled(manual, data) and self.getApiLevel()
class HTTPSConnection(httplib.HTTPSConnection):
def __init__(self, *args, **kwargs):
httplib.HTTPSConnection.__init__(self, *args, **kwargs)
def connect(self):
sock = socket.create_connection((self.host, self.port), self.timeout)
if sys.version_info < (2, 6, 7):
if hasattr(self, '_tunnel_host'):
self.sock = sock
self._tunnel()
else:
if self._tunnel_host:
self.sock = sock
self._tunnel()
self.sock = ssl.wrap_socket(sock, self.key_file, self.cert_file, ssl_version = ssl.PROTOCOL_TLSv1)
class HTTPSHandler(urllib2.HTTPSHandler):
def https_open(self, req):
return self.do_open(HTTPSConnection, req)


@@ -0,0 +1,111 @@
from __future__ import with_statement
import os
import traceback
from couchpotato.core._base.downloader.main import DownloaderBase
from couchpotato.core.helpers.encoding import sp
from couchpotato.core.logger import CPLog
log = CPLog(__name__)
autoload = 'Pneumatic'
class Pneumatic(DownloaderBase):
protocol = ['nzb']
strm_syntax = 'plugin://plugin.program.pneumatic/?mode=strm&type=add_file&nzb=%s&nzbname=%s'
status_support = False
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
directory = self.conf('directory')
if not directory or not os.path.isdir(directory):
log.error('No directory set for .strm downloads.')
else:
try:
if not filedata or len(filedata) < 50:
log.error('No nzb available!')
return False
full_path = os.path.join(directory, self.createFileName(data, filedata, media))
try:
if not os.path.isfile(full_path):
log.info('Downloading %s to %s.', (data.get('protocol'), full_path))
with open(full_path, 'wb') as f:
f.write(filedata)
nzb_name = self.createNzbName(data, media)
strm_path = os.path.join(directory, nzb_name)
with open(strm_path + '.strm', 'wb') as strm_file:
strm_file.write(self.strm_syntax % (full_path, nzb_name))
return self.downloadReturnId('')
else:
log.info('File %s already exists.', full_path)
return self.downloadReturnId('')
except:
log.error('Failed to download .strm: %s', traceback.format_exc())
pass
except:
log.info('Failed to download file %s: %s', (data.get('name'), traceback.format_exc()))
return False
return False
def test(self):
directory = self.conf('directory')
if directory and os.path.isdir(directory):
test_file = sp(os.path.join(directory, 'couchpotato_test.txt'))
# Check if folder is writable
self.createFile(test_file, 'This is a test file')
if os.path.isfile(test_file):
os.remove(test_file)
return True
return False
config = [{
'name': 'pneumatic',
'order': 30,
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'pneumatic',
'label': 'Pneumatic',
'description': 'Use <a href="http://forum.xbmc.org/showthread.php?tid=97657" target="_blank">Pneumatic</a> to download .strm files.',
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Directory where the .strm file is saved to.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,37 +0,0 @@
from .main import Pneumatic
def start():
return Pneumatic()
config = [{
'name': 'pneumatic',
'order': 30,
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'pneumatic',
'label': 'Pneumatic',
'description': 'Use <a href="http://forum.xbmc.org/showthread.php?tid=97657" target="_blank">Pneumatic</a> to download .strm files.',
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Directory where the .strm file is saved to.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,54 +0,0 @@
from __future__ import with_statement
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.logger import CPLog
import os
import traceback
log = CPLog(__name__)
class Pneumatic(Downloader):
type = ['nzb']
strm_syntax = 'plugin://plugin.program.pneumatic/?mode=strm&type=add_file&nzb=%s&nzbname=%s'
def download(self, data = {}, movie = {}, filedata = None):
directory = self.conf('directory')
if not directory or not os.path.isdir(directory):
log.error('No directory set for .strm downloads.')
else:
try:
if not filedata or len(filedata) < 50:
log.error('No nzb available!')
return False
fullPath = os.path.join(directory, self.createFileName(data, filedata, movie))
try:
if not os.path.isfile(fullPath):
log.info('Downloading %s to %s.', (data.get('type'), fullPath))
with open(fullPath, 'wb') as f:
f.write(filedata)
nzb_name = self.createNzbName(data, movie)
strm_path = os.path.join(directory, nzb_name)
strm_file = open(strm_path + '.strm', 'wb')
strmContent = self.strm_syntax % (fullPath, nzb_name)
strm_file.write(strmContent)
strm_file.close()
return True
else:
log.info('File %s already exists.', fullPath)
return True
except:
log.error('Failed to download .strm: %s', traceback.format_exc())
pass
except:
log.info('Failed to download file %s: %s', (data.get('name'), traceback.format_exc()))
return False
return False


@@ -0,0 +1,245 @@
from base64 import b16encode, b32decode
from hashlib import sha1
import os
from bencode import bencode, bdecode
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import sp
from couchpotato.core.helpers.variable import cleanHost
from couchpotato.core.logger import CPLog
from qbittorrent.client import QBittorrentClient
log = CPLog(__name__)
autoload = 'qBittorrent'
class qBittorrent(DownloaderBase):
protocol = ['torrent', 'torrent_magnet']
qb = None
def __init__(self):
super(qBittorrent, self).__init__()
def connect(self):
if self.qb is not None:
return self.qb
url = cleanHost(self.conf('host'), protocol = True, ssl = False)
if self.conf('username') and self.conf('password'):
self.qb = QBittorrentClient(
url,
username = self.conf('username'),
password = self.conf('password')
)
else:
self.qb = QBittorrentClient(url)
return self.qb
def test(self):
if self.connect():
return True
return False
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
log.debug('Sending "%s" to qBittorrent.', (data.get('name')))
if not self.connect():
return False
if not filedata and data.get('protocol') == 'torrent':
log.error('Failed sending torrent, no data')
return False
if data.get('protocol') == 'torrent_magnet':
filedata = self.magnetToTorrent(data.get('url'))
if filedata is False:
return False
data['protocol'] = 'torrent'
info = bdecode(filedata)["info"]
torrent_hash = sha1(bencode(info)).hexdigest()
# Convert base 32 to hex
if len(torrent_hash) == 32:
torrent_hash = b16encode(b32decode(torrent_hash))
# Send request to qBittorrent
try:
self.qb.add_file(filedata)
return self.downloadReturnId(torrent_hash)
except Exception as e:
log.error('Failed to send torrent to qBittorrent: %s', e)
return False
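The hash handling above derives the info-hash from the bencoded info dict and then normalizes a 32-character base32 form (as found in some magnet links) to 40-character hex. A small sketch of that conversion step in isolation; note the original runs on Python 2, while here b16encode/b32decode return bytes and need an explicit decode:

```python
from base64 import b16encode, b32decode, b32encode
from hashlib import sha1

def normalize_hash(torrent_hash):
    # Magnets sometimes carry the 20-byte info-hash as 32 base32 chars
    # instead of 40 hex chars; convert to hex, as the code above does.
    if len(torrent_hash) == 32:
        torrent_hash = b16encode(b32decode(torrent_hash)).decode()
    return torrent_hash.lower()

digest = sha1(b'example torrent info dict').digest()  # 20 raw bytes
hex_form = b16encode(digest).decode()                 # 40 hex chars
b32_form = b32encode(digest).decode()                 # 32 base32 chars
assert normalize_hash(b32_form) == hex_form.lower()
```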
def getTorrentStatus(self, torrent):
if torrent.state in ('uploading', 'queuedUP', 'stalledUP'):
return 'seeding'
if torrent.progress == 1:
return 'completed'
return 'busy'
def getAllDownloadStatus(self, ids):
log.debug('Checking qBittorrent download status.')
if not self.connect():
return []
try:
torrents = self.qb.get_torrents()
release_downloads = ReleaseDownloadList(self)
for torrent in torrents:
if torrent.hash in ids:
torrent.update_general() # get extra info
torrent_filelist = torrent.get_files()
torrent_files = []
torrent_dir = os.path.join(torrent.save_path, torrent.name)
if os.path.isdir(torrent_dir):
torrent.save_path = torrent_dir
if len(torrent_filelist) > 1 and os.path.isdir(torrent_dir): # multi file torrent, path.isdir check makes sure we're not in the root download folder
for root, _, files in os.walk(torrent.save_path):
for f in files:
torrent_files.append(sp(os.path.join(root, f)))
else: # multi or single file placed directly in torrent.save_path
for f in torrent_filelist:
file_path = os.path.join(torrent.save_path, f.name)
if os.path.isfile(file_path):
torrent_files.append(sp(file_path))
release_downloads.append({
'id': torrent.hash,
'name': torrent.name,
'status': self.getTorrentStatus(torrent),
'seed_ratio': torrent.ratio,
'original_status': torrent.state,
'timeleft': torrent.progress * 100 if torrent.progress else -1, # percentage
'folder': sp(torrent.save_path),
'files': torrent_files
})
return release_downloads
except Exception as e:
log.error('Failed to get status from qBittorrent: %s', e)
return []
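The file-collection branch in getAllDownloadStatus() walks the whole torrent directory for multi-file torrents, but trusts the client's file list for files placed directly in save_path. A simplified stdlib sketch of that decision (the real code also re-roots save_path and uses the qBittorrent client's file objects):

```python
import os

def collect_files(save_path, filelist):
    # Multi-file torrent living in its own directory: walk everything.
    # Otherwise resolve each listed name against save_path.
    files = []
    if len(filelist) > 1 and os.path.isdir(save_path):
        for root, _, names in os.walk(save_path):
            files.extend(os.path.join(root, n) for n in names)
    else:
        for name in filelist:
            path = os.path.join(save_path, name)
            if os.path.isfile(path):
                files.append(path)
    return files
```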
def pause(self, release_download, pause = True):
if not self.connect():
return False
torrent = self.qb.get_torrent(release_download['id'])
if torrent is None:
return False
if pause:
return torrent.pause()
return torrent.resume()
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
return self.processComplete(release_download, delete_files = True)
def processComplete(self, release_download, delete_files):
log.debug('Requesting qBittorrent to remove the torrent %s%s.',
(release_download['name'], ' and cleanup the downloaded files' if delete_files else ''))
if not self.connect():
return False
torrent = self.qb.find_torrent(release_download['id'])
if torrent is None:
return False
if delete_files:
torrent.delete() # deletes torrent with data
else:
torrent.remove() # just removes the torrent, doesn't delete data
return True
config = [{
'name': 'qbittorrent',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'qbittorrent',
'label': 'qBittorrent',
'description': '',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'http://localhost:8080/',
'description': 'RPC Communication URI. Usually <strong>http://localhost:8080/</strong>'
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'remove_complete',
'label': 'Remove torrent',
'default': False,
'advanced': True,
'type': 'bool',
'description': 'Remove the torrent after it finishes seeding.',
},
{
'name': 'delete_files',
'label': 'Remove files',
'default': True,
'type': 'bool',
'advanced': True,
'description': 'Also remove the leftover files.',
},
{
'name': 'paused',
'type': 'bool',
'advanced': True,
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -0,0 +1,335 @@
from base64 import b16encode, b32decode
from datetime import timedelta
from hashlib import sha1
from urlparse import urlparse
import os
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.encoding import sp
from couchpotato.core.helpers.variable import cleanHost, splitString
from couchpotato.core.logger import CPLog
from bencode import bencode, bdecode
from rtorrent import RTorrent
log = CPLog(__name__)
autoload = 'rTorrent'
class rTorrent(DownloaderBase):
protocol = ['torrent', 'torrent_magnet']
rt = None
error_msg = ''
# Migration url to host options
def __init__(self):
super(rTorrent, self).__init__()
addEvent('app.load', self.migrate)
addEvent('setting.save.rtorrent.*.after', self.settingsChanged)
def migrate(self):
url = self.conf('url')
if url:
host_split = splitString(url.split('://')[-1], split_on = '/')
self.conf('ssl', value = url.startswith('https'))
self.conf('host', value = host_split[0].strip())
self.conf('rpc_url', value = '/'.join(host_split[1:]))
self.deleteConf('url')
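The migrate() method above splits the legacy single `url` setting into the new `ssl`/`host`/`rpc_url` options. The same split as a pure function, using plain str.split in place of CouchPotato's splitString helper:

```python
def migrate_url(url):
    # Legacy 'https://seedbox:443/RPC2' becomes ssl + host + rpc_url,
    # mirroring the migrate() logic above.
    host_split = url.split('://')[-1].split('/')
    return {
        'ssl': url.startswith('https'),
        'host': host_split[0].strip(),
        'rpc_url': '/'.join(host_split[1:]),
    }

assert migrate_url('https://seedbox:443/RPC2') == {
    'ssl': True, 'host': 'seedbox:443', 'rpc_url': 'RPC2'}
```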
def settingsChanged(self):
# Reset active connection if settings have changed
if self.rt:
log.debug('Settings have changed, closing active connection')
self.rt = None
return True
def connect(self, reconnect = False):
# Already connected?
if not reconnect and self.rt is not None:
return self.rt
url = cleanHost(self.conf('host'), protocol = True, ssl = self.conf('ssl'))
# Automatically add '+https' to 'httprpc' protocol if SSL is enabled
if self.conf('ssl') and url.startswith('httprpc://'):
url = url.replace('httprpc://', 'httprpc+https://')
parsed = urlparse(url)
# rpc_url is only used on http/https scgi pass-through
if parsed.scheme in ['http', 'https']:
url += self.conf('rpc_url')
self.rt = RTorrent(
url,
self.conf('username'),
self.conf('password')
)
self.error_msg = ''
try:
self.rt._verify_conn()
except AssertionError as e:
self.error_msg = e.message
self.rt = None
return self.rt
def test(self):
if self.connect(True):
return True
if self.error_msg:
return False, 'Connection failed: ' + self.error_msg
return False
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
log.debug('Sending "%s" to rTorrent.', (data.get('name')))
if not self.connect():
return False
torrent_params = {}
if self.conf('label'):
torrent_params['label'] = self.conf('label')
if not filedata and data.get('protocol') == 'torrent':
log.error('Failed sending torrent, no data')
return False
# Try download magnet torrents
if data.get('protocol') == 'torrent_magnet':
filedata = self.magnetToTorrent(data.get('url'))
if filedata is False:
return False
data['protocol'] = 'torrent'
info = bdecode(filedata)["info"]
torrent_hash = sha1(bencode(info)).hexdigest().upper()
# Convert base 32 to hex
if len(torrent_hash) == 32:
torrent_hash = b16encode(b32decode(torrent_hash))
# Send request to rTorrent
try:
# Send torrent to rTorrent
torrent = self.rt.load_torrent(filedata, verify_retries=10)
if not torrent:
log.error('Unable to find the torrent, did it fail to load?')
return False
# Set label
if self.conf('label'):
torrent.set_custom(1, self.conf('label'))
if self.conf('directory'):
torrent.set_directory(self.conf('directory'))
# Start torrent
if not self.conf('paused', default = 0):
torrent.start()
return self.downloadReturnId(torrent_hash)
except Exception as err:
log.error('Failed to send torrent to rTorrent: %s', err)
return False
def getTorrentStatus(self, torrent):
if not torrent.complete:
return 'busy'
if torrent.open:
return 'seeding'
return 'completed'
def getAllDownloadStatus(self, ids):
log.debug('Checking rTorrent download status.')
if not self.connect():
return []
try:
torrents = self.rt.get_torrents()
release_downloads = ReleaseDownloadList(self)
for torrent in torrents:
if torrent.info_hash in ids:
torrent_directory = os.path.normpath(torrent.directory)
torrent_files = []
for file in torrent.get_files():
if not os.path.normpath(file.path).startswith(torrent_directory):
file_path = os.path.join(torrent_directory, file.path.lstrip('/'))
else:
file_path = file.path
torrent_files.append(sp(file_path))
release_downloads.append({
'id': torrent.info_hash,
'name': torrent.name,
'status': self.getTorrentStatus(torrent),
'seed_ratio': torrent.ratio,
'original_status': torrent.state,
'timeleft': str(timedelta(seconds = float(torrent.left_bytes) / torrent.down_rate)) if torrent.down_rate > 0 else -1,
'folder': sp(torrent.directory),
'files': torrent_files
})
return release_downloads
except Exception as err:
log.error('Failed to get status from rTorrent: %s', err)
return []
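The `timeleft` field in the status loop above is remaining bytes divided by the current download rate, rendered via timedelta, with -1 signalling a stalled transfer. In isolation:

```python
from datetime import timedelta

def time_left(left_bytes, down_rate):
    # Same arithmetic as the rTorrent status loop above:
    # remaining bytes over current rate, as H:MM:SS; -1 when stalled.
    if down_rate > 0:
        return str(timedelta(seconds=float(left_bytes) / down_rate))
    return -1

assert time_left(1048576, 1048576) == '0:00:01'
assert time_left(1048576, 0) == -1
```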
def pause(self, release_download, pause = True):
if not self.connect():
return False
torrent = self.rt.find_torrent(release_download['id'])
if torrent is None:
return False
if pause:
return torrent.pause()
return torrent.resume()
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
return self.processComplete(release_download, delete_files = True)
def processComplete(self, release_download, delete_files):
log.debug('Requesting rTorrent to remove the torrent %s%s.',
(release_download['name'], ' and cleanup the downloaded files' if delete_files else ''))
if not self.connect():
return False
torrent = self.rt.find_torrent(release_download['id'])
if torrent is None:
return False
if delete_files:
for file_item in torrent.get_files(): # will only delete files, not dir/sub-dir
os.unlink(os.path.join(torrent.directory, file_item.path))
if torrent.is_multi_file() and torrent.directory.endswith(torrent.name):
# Remove empty directories bottom up
try:
for path, _, _ in os.walk(sp(torrent.directory), topdown = False):
os.rmdir(path)
except OSError:
log.info('Directory "%s" contains extra files, unable to remove', torrent.directory)
torrent.erase() # just removes the torrent, doesn't delete data
return True
config = [{
'name': 'rtorrent',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'rtorrent',
'label': 'rTorrent',
'description': '',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:80',
'description': 'RPC Communication URI. Usually <strong>scgi://localhost:5000</strong>, '
'<strong>httprpc://localhost/rutorrent</strong> or <strong>localhost:80</strong>'
},
{
'name': 'ssl',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Use HyperText Transfer Protocol Secure, or <strong>https</strong>',
},
{
'name': 'rpc_url',
'type': 'string',
'default': 'RPC2',
'advanced': True,
'description': 'Change if your RPC mount is at a different path.',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'label',
'description': 'Label to apply on added torrents.',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Download to this directory. Keep empty for default rTorrent download directory.',
},
{
'name': 'remove_complete',
'label': 'Remove torrent',
'default': False,
'advanced': True,
'type': 'bool',
'description': 'Remove the torrent after it finishes seeding.',
},
{
'name': 'delete_files',
'label': 'Remove files',
'default': True,
'type': 'bool',
'advanced': True,
'description': 'Also remove the leftover files.',
},
{
'name': 'paused',
'type': 'bool',
'advanced': True,
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -0,0 +1,281 @@
from datetime import timedelta
from urllib2 import URLError
import json
import os
import traceback
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import tryUrlencode, ss, sp
from couchpotato.core.helpers.variable import cleanHost, mergeDicts
from couchpotato.core.logger import CPLog
from couchpotato.environment import Env
log = CPLog(__name__)
autoload = 'Sabnzbd'
class Sabnzbd(DownloaderBase):
protocol = ['nzb']
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
log.info('Sending "%s" to SABnzbd.', data.get('name'))
req_params = {
'cat': self.conf('category'),
'mode': 'addurl',
'nzbname': self.createNzbName(data, media),
'priority': self.conf('priority'),
}
nzb_filename = None
if filedata:
if len(filedata) < 50:
log.error('No proper nzb available: %s', filedata)
return False
# If it's a .rar, it adds the .rar extension, otherwise it stays .nzb
nzb_filename = self.createFileName(data, filedata, media)
req_params['mode'] = 'addfile'
else:
req_params['name'] = data.get('url')
try:
if nzb_filename and req_params.get('mode') == 'addfile':
sab_data = self.call(req_params, files = {'nzbfile': (ss(nzb_filename), filedata)})
else:
sab_data = self.call(req_params)
except URLError:
log.error('Failed sending release, probably wrong HOST: %s', traceback.format_exc(0))
return False
except:
log.error('Failed sending release, use API key, NOT the NZB key: %s', traceback.format_exc(0))
return False
log.debug('Result from SAB: %s', sab_data)
if sab_data.get('status') and not sab_data.get('error'):
log.info('NZB sent to SAB successfully.')
if filedata:
return self.downloadReturnId(sab_data.get('nzo_ids')[0])
else:
return True
else:
log.error('Error getting data from SABNZBd: %s', sab_data)
return False
def test(self):
try:
sab_data = self.call({
'mode': 'version',
})
v = sab_data.split('.')
if int(v[0]) == 0 and int(v[1]) < 7:
return False, 'Your Sabnzbd client is too old, please update to the newest version.'
# the version check will work even with wrong api key, so we need the next check as well
sab_data = self.call({
'mode': 'qstatus',
})
if not sab_data:
return False
except:
return False
return True
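The test() method above gates on SABnzbd 0.7+ by splitting the version string. That check as a standalone sketch (assuming, as the original does, that the first two components are plain integers):

```python
def version_supported(version_string):
    # The API usage in this downloader requires SABnzbd 0.7 or newer.
    major, minor = version_string.split('.')[:2]
    return not (int(major) == 0 and int(minor) < 7)

assert version_supported('0.7.20')
assert not version_supported('0.6.15')
```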
def getAllDownloadStatus(self, ids):
log.debug('Checking SABnzbd download status.')
# Go through Queue
try:
queue = self.call({
'mode': 'queue',
})
except:
log.error('Failed getting queue: %s', traceback.format_exc(1))
return []
# Go through history items
try:
history = self.call({
'mode': 'history',
'limit': 15,
})
except:
log.error('Failed getting history json: %s', traceback.format_exc(1))
return []
release_downloads = ReleaseDownloadList(self)
# Get busy releases
for nzb in queue.get('slots', []):
if nzb['nzo_id'] in ids:
status = 'busy'
if 'ENCRYPTED / ' in nzb['filename']:
status = 'failed'
release_downloads.append({
'id': nzb['nzo_id'],
'name': nzb['filename'],
'status': status,
'original_status': nzb['status'],
'timeleft': nzb['timeleft'] if not queue['paused'] else -1,
})
# Get old releases
for nzb in history.get('slots', []):
if nzb['nzo_id'] in ids:
status = 'busy'
if nzb['status'] == 'Failed' or (nzb['status'] == 'Completed' and nzb['fail_message'].strip()):
status = 'failed'
elif nzb['status'] == 'Completed':
status = 'completed'
release_downloads.append({
'id': nzb['nzo_id'],
'name': nzb['name'],
'status': status,
'original_status': nzb['status'],
'timeleft': str(timedelta(seconds = 0)),
'folder': sp(os.path.dirname(nzb['storage']) if os.path.isfile(nzb['storage']) else nzb['storage']),
})
return release_downloads
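The history loop above maps SABnzbd's own status strings onto CouchPotato's busy/failed/completed states; notably, SAB can report 'Completed' while still carrying a non-empty fail_message. The mapping as a pure function:

```python
def history_status(sab_status, fail_message):
    # Same mapping as the history loop above: a 'Completed' slot with a
    # fail_message is still a failure (e.g. post-processing errors).
    if sab_status == 'Failed' or (sab_status == 'Completed' and fail_message.strip()):
        return 'failed'
    if sab_status == 'Completed':
        return 'completed'
    return 'busy'

assert history_status('Completed', '') == 'completed'
assert history_status('Completed', 'CRC error') == 'failed'
```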
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
try:
self.call({
'mode': 'queue',
'name': 'delete',
'del_files': '1',
'value': release_download['id']
}, use_json = False)
self.call({
'mode': 'history',
'name': 'delete',
'del_files': '1',
'value': release_download['id']
}, use_json = False)
except:
log.error('Failed deleting: %s', traceback.format_exc(0))
return False
return True
def processComplete(self, release_download, delete_files = False):
log.debug('Requesting SabNZBd to remove the NZB %s.', release_download['name'])
try:
self.call({
'mode': 'history',
'name': 'delete',
'del_files': '0',
'value': release_download['id']
}, use_json = False)
except:
log.error('Failed removing: %s', traceback.format_exc(0))
return False
return True
def call(self, request_params, use_json = True, **kwargs):
url = cleanHost(self.conf('host'), ssl = self.conf('ssl')) + 'api?' + tryUrlencode(mergeDicts(request_params, {
'apikey': self.conf('api_key'),
'output': 'json'
}))
data = self.urlopen(url, timeout = 60, show_error = False, headers = {'User-Agent': Env.getIdentifier()}, **kwargs)
if use_json:
d = json.loads(data)
if d.get('error'):
log.error('Error getting data from SABNZBd: %s', d.get('error'))
return {}
return d.get(request_params['mode']) or d
else:
return data
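Every request in call() above is the host plus `api?` plus the url-encoded parameters, with `apikey` and `output=json` merged in. A stdlib stand-in for the cleanHost/tryUrlencode/mergeDicts helpers (urllib.parse is the Python 3 spelling; the original is Python 2):

```python
from urllib.parse import urlencode

def build_api_url(host, api_key, request_params):
    # Every SABnzbd call carries the API key and asks for JSON output,
    # as in the call() method above.
    params = dict(request_params, apikey=api_key, output='json')
    return host.rstrip('/') + '/api?' + urlencode(params)
```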
config = [{
'name': 'sabnzbd',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'sabnzbd',
'label': 'Sabnzbd',
'description': 'Use <a href="http://sabnzbd.org/" target="_blank">SABnzbd</a> (0.7+) to download NZBs.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'localhost:8080',
},
{
'name': 'ssl',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Use HyperText Transfer Protocol Secure, or <strong>https</strong>',
},
{
'name': 'api_key',
'label': 'Api Key',
'description': 'Used for all calls to Sabnzbd.',
},
{
'name': 'category',
'label': 'Category',
'description': 'The category CP places the nzb in. Like <strong>movies</strong> or <strong>couchpotato</strong>',
},
{
'name': 'priority',
'label': 'Priority',
'type': 'dropdown',
'default': '0',
'advanced': True,
'values': [('Paused', -2), ('Low', -1), ('Normal', 0), ('High', 1), ('Forced', 2)],
'description': 'Add to the queue with this priority.',
},
{
'name': 'manual',
'default': False,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'remove_complete',
'advanced': True,
'label': 'Remove NZB',
'default': False,
'type': 'bool',
'description': 'Remove the NZB from history after it completed.',
},
{
'name': 'delete_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]


@@ -1,53 +0,0 @@
from .main import Sabnzbd
def start():
return Sabnzbd()
config = [{
'name': 'sabnzbd',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'sabnzbd',
'label': 'Sabnzbd',
'description': 'Use <a href="http://sabnzbd.org/" target="_blank">SABnzbd</a> to download NZBs.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb',
},
{
'name': 'host',
'default': 'localhost:8080',
},
{
'name': 'api_key',
'label': 'Api Key',
'description': 'Used for all calls to Sabnzbd.',
},
{
'name': 'category',
'label': 'Category',
'description': 'The category CP places the nzb in. Like <strong>movies</strong> or <strong>couchpotato</strong>',
},
{
'name': 'manual',
'default': False,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]


@@ -1,152 +0,0 @@
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import tryUrlencode, ss
from couchpotato.core.helpers.variable import cleanHost, mergeDicts
from couchpotato.core.logger import CPLog
from urllib2 import URLError
import json
import traceback
log = CPLog(__name__)
class Sabnzbd(Downloader):
type = ['nzb']
def download(self, data = {}, movie = {}, filedata = None):
log.info('Sending "%s" to SABnzbd.', data.get('name'))
params = {
'apikey': self.conf('api_key'),
'cat': self.conf('category'),
'mode': 'addurl',
'nzbname': self.createNzbName(data, movie),
}
if filedata:
if len(filedata) < 50:
log.error('No proper nzb available: %s', (filedata))
return False
# If it's a .rar, it adds the .rar extension, otherwise it stays .nzb
nzb_filename = self.createFileName(data, filedata, movie)
params['mode'] = 'addfile'
else:
params['name'] = data.get('url')
url = cleanHost(self.conf('host')) + 'api?' + tryUrlencode(params)
try:
if params.get('mode') is 'addfile':
sab = self.urlopen(url, timeout = 60, params = {'nzbfile': (ss(nzb_filename), filedata)}, multipart = True, show_error = False)
else:
sab = self.urlopen(url, timeout = 60, show_error = False)
except URLError:
log.error('Failed sending release, probably wrong HOST: %s', traceback.format_exc(0))
return False
except:
log.error('Failed sending release, use API key, NOT the NZB key: %s', traceback.format_exc(0))
return False
result = sab.strip()
if not result:
log.error('SABnzbd didn\'t return anything.')
return False
log.debug('Result text from SAB: %s', result[:40])
if result[:2] == 'ok':
log.info('NZB sent to SAB successfully.')
return True
else:
log.error(result[:40])
return False
def getAllDownloadStatus(self):
log.debug('Checking SABnzbd download status.')
# Go through Queue
try:
queue = self.call({
'mode': 'queue',
})
except:
log.error('Failed getting queue: %s', traceback.format_exc(1))
return False
# Go through history items
try:
history = self.call({
'mode': 'history',
'limit': 15,
})
except:
log.error('Failed getting history json: %s', traceback.format_exc(1))
return False
statuses = []
# Get busy releases
for item in queue.get('slots', []):
statuses.append({
'id': item['nzo_id'],
'name': item['filename'],
'status': 'busy',
'original_status': item['status'],
'timeleft': item['timeleft'] if not queue['paused'] else -1,
})
# Get old releases
for item in history.get('slots', []):
status = 'busy'
if item['status'] == 'Failed' or (item['status'] == 'Completed' and item['fail_message'].strip()):
status = 'failed'
elif item['status'] == 'Completed':
status = 'completed'
statuses.append({
'id': item['nzo_id'],
'name': item['name'],
'status': status,
'original_status': item['status'],
'timeleft': 0,
})
return statuses
def removeFailed(self, item):
log.info('%s failed downloading, deleting...', item['name'])
try:
self.call({
'mode': 'history',
'name': 'delete',
'del_files': '1',
'value': item['id']
}, use_json = False)
except:
log.error('Failed deleting: %s', traceback.format_exc(0))
return False
return True
def call(self, params, use_json = True):
url = cleanHost(self.conf('host')) + 'api?' + tryUrlencode(mergeDicts(params, {
'apikey': self.conf('api_key'),
'output': 'json'
}))
data = self.urlopen(url, timeout = 60, show_error = False)
if use_json:
d = json.loads(data)
if d.get('error'):
log.error('Error getting data from SABNZBd: %s', d.get('error'))
return {}
return d[params['mode']]
else:
return data


@@ -0,0 +1,226 @@
import json
import traceback
from couchpotato.core._base.downloader.main import DownloaderBase
from couchpotato.core.helpers.encoding import isInt
from couchpotato.core.helpers.variable import cleanHost
from couchpotato.core.logger import CPLog
import requests
log = CPLog(__name__)
autoload = 'Synology'
class Synology(DownloaderBase):
protocol = ['nzb', 'torrent', 'torrent_magnet']
status_support = False
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
response = False
log.info('Sending "%s" (%s) to Synology.', (data['name'], data['protocol']))
# Load host from config and split out port.
host = cleanHost(self.conf('host'), protocol = False).split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
try:
# Send request to Synology
srpc = SynologyRPC(host[0], host[1], self.conf('username'), self.conf('password'), self.conf('destination'))
if data['protocol'] == 'torrent_magnet':
log.info('Adding torrent URL %s', data['url'])
response = srpc.create_task(url = data['url'])
elif data['protocol'] in ['nzb', 'torrent']:
log.info('Adding %s' % data['protocol'])
if not filedata:
log.error('No %s data found', data['protocol'])
else:
filename = data['name'] + '.' + data['protocol']
response = srpc.create_task(filename = filename, filedata = filedata)
except:
log.error('Exception while adding torrent: %s', traceback.format_exc())
finally:
return self.downloadReturnId('') if response else False
def test(self):
host = cleanHost(self.conf('host'), protocol = False).split(':')
try:
srpc = SynologyRPC(host[0], host[1], self.conf('username'), self.conf('password'))
test_result = srpc.test()
except:
return False
return test_result
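Both download() and test() above strip the protocol from the configured host and split out the port, refusing to continue when the port is missing. A stdlib sketch of that parsing (cleanHost and isInt are CouchPotato helpers; str methods are used here instead):

```python
def split_host(host_setting):
    # Mirrors cleanHost(...).split(':') plus the isInt port check above.
    host = host_setting.split('://')[-1].rstrip('/')
    parts = host.split(':')
    if len(parts) != 2 or not parts[1].isdigit():
        raise ValueError('port is missing from host setting')
    return parts[0], int(parts[1])

assert split_host('http://nas.local:5000/') == ('nas.local', 5000)
```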
def getEnabledProtocol(self):
if self.conf('use_for') == 'both':
return super(Synology, self).getEnabledProtocol()
elif self.conf('use_for') == 'torrent':
return ['torrent', 'torrent_magnet']
else:
return ['nzb']
def isEnabled(self, manual = False, data = None):
if not data: data = {}
for_protocol = ['both']
if data and 'torrent' in data.get('protocol'):
for_protocol.append('torrent')
elif data:
for_protocol.append(data.get('protocol'))
return super(Synology, self).isEnabled(manual, data) and\
((self.conf('use_for') in for_protocol))
class SynologyRPC(object):
"""SynologyRPC lite library"""
def __init__(self, host = 'localhost', port = 5000, username = None, password = None, destination = None):
super(SynologyRPC, self).__init__()
self.download_url = 'http://%s:%s/webapi/DownloadStation/task.cgi' % (host, port)
self.auth_url = 'http://%s:%s/webapi/auth.cgi' % (host, port)
self.sid = None
self.username = username
self.password = password
self.destination = destination
self.session_name = 'DownloadStation'
def _login(self):
if self.username and self.password:
args = {'api': 'SYNO.API.Auth', 'account': self.username, 'passwd': self.password, 'version': 2,
'method': 'login', 'session': self.session_name, 'format': 'sid'}
response = self._req(self.auth_url, args)
if response['success']:
self.sid = response['data']['sid']
log.debug('sid=%s', self.sid)
else:
log.error('Couldn\'t login to Synology, %s', response)
return response['success']
else:
log.error('User or password missing, not using authentication.')
return False
def _logout(self):
args = {'api':'SYNO.API.Auth', 'version':1, 'method':'logout', 'session':self.session_name, '_sid':self.sid}
return self._req(self.auth_url, args)
def _req(self, url, args, files = None):
response = {'success': False}
try:
req = requests.post(url, data = args, files = files)
req.raise_for_status()
response = json.loads(req.text)
if response['success']:
log.info('Synology action successful')
return response
except requests.ConnectionError as err:
log.error('Synology connection error, check your config %s', err)
except requests.HTTPError as err:
log.error('SynologyRPC HTTPError: %s', err)
except Exception as err:
log.error('Exception: %s', err)
finally:
return response
def create_task(self, url = None, filename = None, filedata = None):
""" Creates new download task in Synology DownloadStation. Either specify
url or pair (filename, filedata).
Returns True if task was created, False otherwise
"""
result = False
# login
if self._login():
args = {'api': 'SYNO.DownloadStation.Task',
'version': '1',
'method': 'create',
'_sid': self.sid}
if self.destination and len(self.destination) > 0:
args['destination'] = self.destination
if url:
log.info('Login success, adding torrent URI')
args['uri'] = url
response = self._req(self.download_url, args = args)
log.info('Response: %s', response)
result = response['success']
elif filename and filedata:
log.info('Login success, adding torrent')
files = {'file': (filename, filedata)}
response = self._req(self.download_url, args = args, files = files)
log.info('Response: %s', response)
result = response['success']
else:
log.error('Invalid use of SynologyRPC.create_task: either url or filename+filedata must be specified')
self._logout()
return result
def test(self):
return bool(self._login())
config = [{
'name': 'synology',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'synology',
'label': 'Synology',
'description': 'Use <a href="http://www.synology.com/dsm/home_home_applications_download_station.php" target="_blank">Synology Download Station</a> to download.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'nzb,torrent',
},
{
'name': 'host',
'default': 'localhost:5000',
'description': 'Hostname with port. Usually <strong>localhost:5000</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'destination',
'description': 'Specify <strong>existing</strong> destination share to where your files will be downloaded, usually <strong>Downloads</strong>',
'advanced': True,
},
{
'name': 'use_for',
'label': 'Use for',
'default': 'both',
'type': 'dropdown',
'values': [('usenet & torrents', 'both'), ('usenet', 'nzb'), ('torrent', 'torrent')],
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]
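The `_req`/`create_task` flow above follows the usual Synology Web API pattern: authenticate against `auth.cgi` to obtain a `sid`, pass it as `_sid` on every `task.cgi` call, then log out. A minimal sketch of how the create-task payload is assembled (the sid, URL, and destination values here are placeholders, not real data):

```python
def build_create_task_args(sid, url=None, destination=None):
    # Mirrors the args dict built in SynologyRPC.create_task above.
    args = {
        'api': 'SYNO.DownloadStation.Task',
        'version': '1',
        'method': 'create',
        '_sid': sid,
    }
    if destination:
        args['destination'] = destination
    if url:
        args['uri'] = url
    return args

args = build_create_task_args('dummy-sid', url='magnet:?xt=urn:btih:abc', destination='Downloads')
```

When a torrent file is sent instead of a URL, the same base args are used and the file goes in the multipart body, as in the `filename and filedata` branch above.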


@@ -1,45 +0,0 @@
from .main import Synology
def start():
return Synology()
config = [{
'name': 'synology',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'synology',
'label': 'Synology',
'description': 'Use <a href="http://www.synology.com/dsm/home_home_applications_download_station.php" target="_blank">Synology Download Station</a> to download.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:5000',
'description': 'Hostname with port. Usually <strong>localhost:5000</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,105 +0,0 @@
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import isInt
from couchpotato.core.logger import CPLog
import httplib
import json
import urllib
import urllib2
log = CPLog(__name__)
class Synology(Downloader):
type = ['torrent_magnet']
log = CPLog(__name__)
def download(self, data, movie, filedata = None):
log.info('Sending "%s" (%s) to Synology.', (data.get('name'), data.get('type')))
# Load host from config and split out port.
host = self.conf('host').split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
if data.get('type') == 'torrent':
log.error('Can\'t add binary torrent file')
return False
try:
# Send request to Transmission
srpc = SynologyRPC(host[0], host[1], self.conf('username'), self.conf('password'))
remote_torrent = srpc.add_torrent_uri(data.get('url'))
log.info('Response: %s', remote_torrent)
return remote_torrent['success']
except Exception, err:
log.error('Exception while adding torrent: %s', err)
return False
class SynologyRPC(object):
'''SynologyRPC lite library'''
def __init__(self, host = 'localhost', port = 5000, username = None, password = None):
super(SynologyRPC, self).__init__()
self.download_url = 'http://%s:%s/webapi/DownloadStation/task.cgi' % (host, port)
self.auth_url = 'http://%s:%s/webapi/auth.cgi' % (host, port)
self.username = username
self.password = password
self.session_name = 'DownloadStation'
def _login(self):
if self.username and self.password:
args = {'api': 'SYNO.API.Auth', 'account': self.username, 'passwd': self.password, 'version': 2,
'method': 'login', 'session': self.session_name, 'format': 'sid'}
response = self._req(self.auth_url, args)
if response['success'] == True:
self.sid = response['data']['sid']
log.debug('Sid=%s', self.sid)
return response
elif self.username or self.password:
log.error('User or password missing, not using authentication.')
return False
def _logout(self):
args = {'api':'SYNO.API.Auth', 'version':1, 'method':'logout', 'session':self.session_name, '_sid':self.sid}
return self._req(self.auth_url, args)
def _req(self, url, args):
req_url = url + '?' + urllib.urlencode(args)
try:
req_open = urllib2.urlopen(req_url)
response = json.loads(req_open.read())
if response['success'] == True:
log.info('Synology action successful')
return response
except httplib.InvalidURL, err:
log.error('Invalid Synology host, check your config %s', err)
return False
except urllib2.HTTPError, err:
log.error('SynologyRPC HTTPError: %s', err)
return False
except urllib2.URLError, err:
log.error('Unable to connect to Synology %s', err)
return False
def add_torrent_uri(self, torrent):
log.info('Adding torrent URL %s', torrent)
response = {}
# login
login = self._login()
if login and login['success'] == True:
log.info('Login success, adding torrent')
args = {'api':'SYNO.DownloadStation.Task', 'version':1, 'method':'create', 'uri':torrent, '_sid':self.sid}
response = self._req(self.download_url, args)
self._logout()
else:
log.error('Couldn\'t login to Synology, %s', login)
return response


@@ -0,0 +1,348 @@
from base64 import b64encode
from datetime import timedelta
import httplib
import json
import os.path
import re
import urllib2
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import isInt, sp
from couchpotato.core.helpers.variable import tryInt, tryFloat, cleanHost
from couchpotato.core.logger import CPLog
log = CPLog(__name__)
autoload = 'Transmission'
class Transmission(DownloaderBase):
protocol = ['torrent', 'torrent_magnet']
log = CPLog(__name__)
trpc = None
def connect(self, reconnect = False):
# Load host from config and split out port.
host = cleanHost(self.conf('host'), protocol = False).split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
if not self.trpc or reconnect:
self.trpc = TransmissionRPC(host[0], port = host[1], rpc_url = self.conf('rpc_url').strip('/ '), username = self.conf('username'), password = self.conf('password'))
return self.trpc
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
log.info('Sending "%s" (%s) to Transmission.', (data.get('name'), data.get('protocol')))
if not self.connect():
return False
if not filedata and data.get('protocol') == 'torrent':
log.error('Failed sending torrent, no data')
return False
# Set parameters for adding torrent
params = {
'paused': self.conf('paused', default = False)
}
if self.conf('directory'):
if os.path.isdir(self.conf('directory')):
params['download-dir'] = self.conf('directory')
else:
log.error('Download directory from Transmission settings: %s doesn\'t exist', self.conf('directory'))
# Change parameters of torrent
torrent_params = {}
if data.get('seed_ratio'):
torrent_params['seedRatioLimit'] = tryFloat(data.get('seed_ratio'))
torrent_params['seedRatioMode'] = 1
if data.get('seed_time'):
torrent_params['seedIdleLimit'] = tryInt(data.get('seed_time')) * 60
torrent_params['seedIdleMode'] = 1
# Send request to Transmission
if data.get('protocol') == 'torrent_magnet':
remote_torrent = self.trpc.add_torrent_uri(data.get('url'), arguments = params)
torrent_params['trackerAdd'] = self.torrent_trackers
else:
remote_torrent = self.trpc.add_torrent_file(b64encode(filedata), arguments = params)
if not remote_torrent:
log.error('Failed sending torrent to Transmission')
return False
# Change settings of added torrents
if torrent_params:
self.trpc.set_torrent(remote_torrent['torrent-added']['hashString'], torrent_params)
log.info('Torrent sent to Transmission successfully.')
return self.downloadReturnId(remote_torrent['torrent-added']['hashString'])
def test(self):
if self.connect(True) and self.trpc.get_session():
return True
return False
def getAllDownloadStatus(self, ids):
log.debug('Checking Transmission download status.')
if not self.connect():
return []
release_downloads = ReleaseDownloadList(self)
return_params = {
'fields': ['id', 'name', 'hashString', 'percentDone', 'status', 'eta', 'isStalled', 'isFinished', 'downloadDir', 'uploadRatio', 'secondsSeeding', 'seedIdleLimit', 'files']
}
session = self.trpc.get_session()
queue = self.trpc.get_alltorrents(return_params)
if not (queue and queue.get('torrents')):
log.debug('Nothing in queue or error')
return []
for torrent in queue['torrents']:
if torrent['hashString'] in ids:
log.debug('name=%s / id=%s / downloadDir=%s / hashString=%s / percentDone=%s / status=%s / isStalled=%s / eta=%s / uploadRatio=%s / isFinished=%s / incomplete-dir-enabled=%s / incomplete-dir=%s',
(torrent['name'], torrent['id'], torrent['downloadDir'], torrent['hashString'], torrent['percentDone'], torrent['status'], torrent.get('isStalled', 'N/A'), torrent['eta'], torrent['uploadRatio'], torrent['isFinished'], session['incomplete-dir-enabled'], session['incomplete-dir']))
status = 'busy'
if torrent.get('isStalled') and not torrent['percentDone'] == 1 and self.conf('stalled_as_failed'):
status = 'failed'
elif torrent['status'] == 0 and torrent['percentDone'] == 1:
status = 'completed'
elif torrent['status'] in [5, 6]:
status = 'seeding'
if session['incomplete-dir-enabled'] and status == 'busy':
torrent_folder = session['incomplete-dir']
else:
torrent_folder = torrent['downloadDir']
torrent_files = []
for file_item in torrent['files']:
torrent_files.append(sp(os.path.join(torrent_folder, file_item['name'])))
release_downloads.append({
'id': torrent['hashString'],
'name': torrent['name'],
'status': status,
'original_status': torrent['status'],
'seed_ratio': torrent['uploadRatio'],
'timeleft': str(timedelta(seconds = torrent['eta'])),
'folder': sp(torrent_folder if len(torrent_files) == 1 else os.path.join(torrent_folder, torrent['name'])),
'files': torrent_files
})
return release_downloads
def pause(self, release_download, pause = True):
if pause:
return self.trpc.stop_torrent(release_download['id'])
else:
return self.trpc.start_torrent(release_download['id'])
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
return self.trpc.remove_torrent(release_download['id'], True)
def processComplete(self, release_download, delete_files = False):
log.debug('Requesting Transmission to remove the torrent %s%s.', (release_download['name'], ' and cleanup the downloaded files' if delete_files else ''))
return self.trpc.remove_torrent(release_download['id'], delete_files)
class TransmissionRPC(object):
"""TransmissionRPC lite library"""
def __init__(self, host = 'localhost', port = 9091, rpc_url = 'transmission', username = None, password = None):
super(TransmissionRPC, self).__init__()
self.url = 'http://' + host + ':' + str(port) + '/' + rpc_url + '/rpc'
self.tag = 0
self.session_id = 0
self.session = {}
if username and password:
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(realm = 'Transmission', uri = self.url, user = username, passwd = password)
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_manager))
opener.addheaders = [('User-agent', 'couchpotato-transmission-client/1.0')]
urllib2.install_opener(opener)
elif username or password:
log.debug('User or password missing, not using authentication.')
self.session = self.get_session()
def _request(self, ojson):
self.tag += 1
headers = {'x-transmission-session-id': str(self.session_id)}
request = urllib2.Request(self.url, json.dumps(ojson).encode('utf-8'), headers)
try:
open_request = urllib2.urlopen(request)
response = json.loads(open_request.read())
log.debug('request: %s', json.dumps(ojson))
log.debug('response: %s', json.dumps(response))
if response['result'] == 'success':
log.debug('Transmission action successful')
return response['arguments']
else:
log.debug('Unknown failure sending command to Transmission. Return text is: %s', response['result'])
return False
except httplib.InvalidURL as err:
log.error('Invalid Transmission host, check your config %s', err)
return False
except urllib2.HTTPError as err:
if err.code == 401:
log.error('Invalid Transmission Username or Password, check your config')
return False
elif err.code == 409:
msg = str(err.read())
try:
self.session_id = \
re.search('X-Transmission-Session-Id:\s*(\w+)', msg).group(1)
log.debug('X-Transmission-Session-Id: %s', self.session_id)
# #resend request with the updated header
return self._request(ojson)
except:
log.error('Unable to get Transmission Session-Id %s', err)
else:
log.error('TransmissionRPC HTTPError: %s', err)
except urllib2.URLError as err:
log.error('Unable to connect to Transmission %s', err)
def get_session(self):
post_data = {'method': 'session-get', 'tag': self.tag}
return self._request(post_data)
def add_torrent_uri(self, torrent, arguments):
arguments['filename'] = torrent
post_data = {'arguments': arguments, 'method': 'torrent-add', 'tag': self.tag}
return self._request(post_data)
def add_torrent_file(self, torrent, arguments):
arguments['metainfo'] = torrent
post_data = {'arguments': arguments, 'method': 'torrent-add', 'tag': self.tag}
return self._request(post_data)
def set_torrent(self, torrent_id, arguments):
arguments['ids'] = torrent_id
post_data = {'arguments': arguments, 'method': 'torrent-set', 'tag': self.tag}
return self._request(post_data)
def get_alltorrents(self, arguments):
post_data = {'arguments': arguments, 'method': 'torrent-get', 'tag': self.tag}
return self._request(post_data)
def stop_torrent(self, torrent_id):
post_data = {'arguments': {'ids': torrent_id}, 'method': 'torrent-stop', 'tag': self.tag}
return self._request(post_data)
def start_torrent(self, torrent_id):
post_data = {'arguments': {'ids': torrent_id}, 'method': 'torrent-start', 'tag': self.tag}
return self._request(post_data)
def remove_torrent(self, torrent_id, delete_local_data):
post_data = {'arguments': {'ids': torrent_id, 'delete-local-data': delete_local_data}, 'method': 'torrent-remove', 'tag': self.tag}
return self._request(post_data)
config = [{
'name': 'transmission',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'transmission',
'label': 'Transmission',
'description': 'Use <a href="http://www.transmissionbt.com/" target="_blank">Transmission</a> to download torrents.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:9091',
'description': 'Hostname with port. Usually <strong>localhost:9091</strong>',
},
{
'name': 'rpc_url',
'type': 'string',
'default': 'transmission',
'advanced': True,
'description': 'Change if you don\'t run Transmission RPC at the default url.',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Download to this directory. Keep empty for default Transmission download directory.',
},
{
'name': 'remove_complete',
'label': 'Remove torrent',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Remove the torrent from Transmission after it finished seeding.',
},
{
'name': 'delete_files',
'label': 'Remove files',
'default': True,
'type': 'bool',
'advanced': True,
'description': 'Also remove the leftover files.',
},
{
'name': 'paused',
'type': 'bool',
'advanced': True,
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'stalled_as_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Consider a stalled torrent as failed',
},
{
'name': 'delete_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]
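The retry logic in `TransmissionRPC._request` above handles Transmission's CSRF protection: the daemon rejects any request lacking a valid `X-Transmission-Session-Id` header with HTTP 409, and includes the current id in the error body, from which the client recovers it and resends. A sketch of just the extraction step (the error body below is a fabricated example of the shape the code expects):

```python
import re

def extract_session_id(error_body):
    # Same pattern TransmissionRPC._request applies to the 409 response body.
    match = re.search(r'X-Transmission-Session-Id:\s*(\w+)', error_body)
    return match.group(1) if match else None

body = '<h1>409: Conflict</h1>X-Transmission-Session-Id: AbC123xYz'
session_id = extract_session_id(body)
```

Returning `None` instead of raising on a missing header is a choice made for this sketch; the code above instead catches the `AttributeError` and logs it.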


@@ -1,63 +0,0 @@
from .main import Transmission
def start():
return Transmission()
config = [{
'name': 'transmission',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'transmission',
'label': 'Transmission',
'description': 'Use <a href="http://www.transmissionbt.com/" target="_blank">Transmission</a> to download torrents.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:9091',
'description': 'Hostname with port. Usually <strong>localhost:9091</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'paused',
'type': 'bool',
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'directory',
'type': 'directory',
'description': 'Where should Transmission save the downloaded files?',
},
{
'name': 'ratio',
'default': 10,
'type': 'int',
'advanced': True,
'description': 'Stop transfer when reaching ratio',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,148 +0,0 @@
from base64 import b64encode
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import isInt
from couchpotato.core.logger import CPLog
import httplib
import json
import os.path
import re
import urllib2
log = CPLog(__name__)
class Transmission(Downloader):
type = ['torrent', 'torrent_magnet']
log = CPLog(__name__)
def download(self, data, movie, filedata = None):
log.debug('Sending "%s" (%s) to Transmission.', (data.get('name'), data.get('type')))
# Load host from config and split out port.
host = self.conf('host').split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
# Set parameters for Transmission
folder_name = self.createFileName(data, filedata, movie)[:-len(data.get('type')) - 1]
folder_path = os.path.join(self.conf('directory', default = ''), folder_name).rstrip(os.path.sep)
# Create the empty folder to download too
self.makeDir(folder_path)
params = {
'paused': self.conf('paused', default = 0),
'download-dir': folder_path
}
torrent_params = {}
if self.conf('ratio'):
torrent_params = {
'seedRatioLimit': self.conf('ratio'),
'seedRatioMode': self.conf('ratio')
}
if not filedata and data.get('type') == 'torrent':
log.error('Failed sending torrent, no data')
return False
# Send request to Transmission
try:
trpc = TransmissionRPC(host[0], port = host[1], username = self.conf('username'), password = self.conf('password'))
if data.get('type') == 'torrent_magnet':
remote_torrent = trpc.add_torrent_uri(data.get('url'), arguments = params)
torrent_params['trackerAdd'] = self.torrent_trackers
else:
remote_torrent = trpc.add_torrent_file(b64encode(filedata), arguments = params)
# Change settings of added torrents
if torrent_params:
trpc.set_torrent(remote_torrent['torrent-added']['hashString'], torrent_params)
return True
except Exception, err:
log.error('Failed to change settings for transfer: %s', err)
return False
class TransmissionRPC(object):
"""TransmissionRPC lite library"""
def __init__(self, host = 'localhost', port = 9091, username = None, password = None):
super(TransmissionRPC, self).__init__()
self.url = 'http://' + host + ':' + str(port) + '/transmission/rpc'
self.tag = 0
self.session_id = 0
self.session = {}
if username and password:
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(realm = None, uri = self.url, user = username, passwd = password)
opener = urllib2.build_opener(urllib2.HTTPBasicAuthHandler(password_manager), urllib2.HTTPDigestAuthHandler(password_manager))
opener.addheaders = [('User-agent', 'couchpotato-transmission-client/1.0')]
urllib2.install_opener(opener)
elif username or password:
log.debug('User or password missing, not using authentication.')
self.session = self.get_session()
def _request(self, ojson):
self.tag += 1
headers = {'x-transmission-session-id': str(self.session_id)}
request = urllib2.Request(self.url, json.dumps(ojson).encode('utf-8'), headers)
try:
open_request = urllib2.urlopen(request)
response = json.loads(open_request.read())
log.debug('response: %s', json.dumps(response))
if response['result'] == 'success':
log.debug('Transmission action successful')
return response['arguments']
else:
log.debug('Unknown failure sending command to Transmission. Return text is: %s', response['result'])
return False
except httplib.InvalidURL, err:
log.error('Invalid Transmission host, check your config %s', err)
return False
except urllib2.HTTPError, err:
if err.code == 401:
log.error('Invalid Transmission Username or Password, check your config')
return False
elif err.code == 409:
msg = str(err.read())
try:
self.session_id = \
re.search('X-Transmission-Session-Id:\s*(\w+)', msg).group(1)
log.debug('X-Transmission-Session-Id: %s', self.session_id)
# #resend request with the updated header
return self._request(ojson)
except:
log.error('Unable to get Transmission Session-Id %s', err)
else:
log.error('TransmissionRPC HTTPError: %s', err)
except urllib2.URLError, err:
log.error('Unable to connect to Transmission %s', err)
def get_session(self):
post_data = {'method': 'session-get', 'tag': self.tag}
return self._request(post_data)
def add_torrent_uri(self, torrent, arguments):
arguments['filename'] = torrent
post_data = {'arguments': arguments, 'method': 'torrent-add', 'tag': self.tag}
return self._request(post_data)
def add_torrent_file(self, torrent, arguments):
arguments['metainfo'] = torrent
post_data = {'arguments': arguments, 'method': 'torrent-add', 'tag': self.tag}
return self._request(post_data)
def set_torrent(self, torrent_id, arguments):
arguments['ids'] = torrent_id
post_data = {'arguments': arguments, 'method': 'torrent-set', 'tag': self.tag}
return self._request(post_data)


@@ -0,0 +1,421 @@
from base64 import b16encode, b32decode
from datetime import timedelta
from hashlib import sha1
import cookielib
import httplib
import json
import os
import re
import stat
import time
import urllib
import urllib2
from bencode import bencode as benc, bdecode
from couchpotato.core._base.downloader.main import DownloaderBase, ReleaseDownloadList
from couchpotato.core.helpers.encoding import isInt, ss, sp
from couchpotato.core.helpers.variable import tryInt, tryFloat, cleanHost
from couchpotato.core.logger import CPLog
from multipartpost import MultipartPostHandler
log = CPLog(__name__)
autoload = 'uTorrent'
class uTorrent(DownloaderBase):
protocol = ['torrent', 'torrent_magnet']
utorrent_api = None
status_flags = {
'STARTED': 1,
'CHECKING': 2,
'CHECK-START': 4,
'CHECKED': 8,
'ERROR': 16,
'PAUSED': 32,
'QUEUED': 64,
'LOADED': 128
}
def connect(self):
# Load host from config and split out port.
host = cleanHost(self.conf('host'), protocol = False).split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
self.utorrent_api = uTorrentAPI(host[0], port = host[1], username = self.conf('username'), password = self.conf('password'))
return self.utorrent_api
def download(self, data = None, media = None, filedata = None):
if not media: media = {}
if not data: data = {}
log.debug("Sending '%s' (%s) to uTorrent.", (data.get('name'), data.get('protocol')))
if not self.connect():
return False
settings = self.utorrent_api.get_settings()
if not settings:
return False
#Fix settings in case they are not set for CPS compatibility
new_settings = {}
if not (settings.get('seed_prio_limitul') == 0 and settings['seed_prio_limitul_flag']):
new_settings['seed_prio_limitul'] = 0
new_settings['seed_prio_limitul_flag'] = True
log.info('Updated uTorrent settings to mark a torrent as complete after the seeding requirements are met.')
if settings.get('bt.read_only_on_complete'): # This doesn't seem to work, as the option is not available through the API. Mitigated with the removeReadOnly function
new_settings['bt.read_only_on_complete'] = False
log.info('Updated uTorrent settings to not set the files to read only after completing.')
if new_settings:
self.utorrent_api.set_settings(new_settings)
torrent_params = {}
if self.conf('label'):
torrent_params['label'] = self.conf('label')
if not filedata and data.get('protocol') == 'torrent':
log.error('Failed sending torrent, no data')
return False
if data.get('protocol') == 'torrent_magnet':
torrent_hash = re.findall('urn:btih:([\w]{32,40})', data.get('url'))[0].upper()
torrent_params['trackers'] = '%0D%0A%0D%0A'.join(self.torrent_trackers)
else:
info = bdecode(filedata)['info']
torrent_hash = sha1(benc(info)).hexdigest().upper()
torrent_filename = self.createFileName(data, filedata, media)
if data.get('seed_ratio'):
torrent_params['seed_override'] = 1
torrent_params['seed_ratio'] = tryInt(tryFloat(data['seed_ratio']) * 1000)
if data.get('seed_time'):
torrent_params['seed_override'] = 1
torrent_params['seed_time'] = tryInt(data['seed_time']) * 3600
# Convert base 32 to hex
if len(torrent_hash) == 32:
torrent_hash = b16encode(b32decode(torrent_hash))
# Send request to uTorrent
if data.get('protocol') == 'torrent_magnet':
self.utorrent_api.add_torrent_uri(torrent_filename, data.get('url'))
else:
self.utorrent_api.add_torrent_file(torrent_filename, filedata)
# Change settings of added torrent
self.utorrent_api.set_torrent(torrent_hash, torrent_params)
if self.conf('paused', default = 0):
self.utorrent_api.pause_torrent(torrent_hash)
return self.downloadReturnId(torrent_hash)
def test(self):
if self.connect():
build_version = self.utorrent_api.get_build()
if not build_version:
return False
if build_version < 25406: # This build corresponds to version 3.0.0 stable
return False, 'Your uTorrent client is too old, please update to the newest version.'
return True
return False
def getAllDownloadStatus(self, ids):
log.debug('Checking uTorrent download status.')
if not self.connect():
return []
release_downloads = ReleaseDownloadList(self)
data = self.utorrent_api.get_status()
if not data:
log.error('Error getting data from uTorrent')
return []
queue = json.loads(data)
if queue.get('error'):
log.error('Error getting data from uTorrent: %s', queue.get('error'))
return []
if not queue.get('torrents'):
log.debug('Nothing in queue')
return []
# Get torrents
for torrent in queue['torrents']:
if torrent[0] in ids:
#Get files of the torrent
torrent_files = []
try:
torrent_files = json.loads(self.utorrent_api.get_files(torrent[0]))
torrent_files = [sp(os.path.join(torrent[26], torrent_file[0])) for torrent_file in torrent_files['files'][1]]
except:
log.debug('Failed getting files from torrent: %s', torrent[2])
status = 'busy'
if (torrent[1] & self.status_flags['STARTED'] or torrent[1] & self.status_flags['QUEUED']) and torrent[4] == 1000:
status = 'seeding'
elif torrent[1] & self.status_flags['ERROR']:
status = 'failed'
elif torrent[4] == 1000:
status = 'completed'
if not status == 'busy':
self.removeReadOnly(torrent_files)
release_downloads.append({
'id': torrent[0],
'name': torrent[2],
'status': status,
'seed_ratio': float(torrent[7]) / 1000,
'original_status': torrent[1],
'timeleft': str(timedelta(seconds = torrent[10])),
'folder': sp(torrent[26]),
'files': torrent_files
})
return release_downloads
def pause(self, release_download, pause = True):
if not self.connect():
return False
return self.utorrent_api.pause_torrent(release_download['id'], pause)
def removeFailed(self, release_download):
log.info('%s failed downloading, deleting...', release_download['name'])
if not self.connect():
return False
return self.utorrent_api.remove_torrent(release_download['id'], remove_data = True)
def processComplete(self, release_download, delete_files = False):
log.debug('Requesting uTorrent to remove the torrent %s%s.', (release_download['name'], ' and cleanup the downloaded files' if delete_files else ''))
if not self.connect():
return False
return self.utorrent_api.remove_torrent(release_download['id'], remove_data = delete_files)
def removeReadOnly(self, files):
# Removes the read-only flag from all files
for filepath in files:
if os.path.isfile(filepath):
#Windows only needs S_IWRITE, but we bitwise-or with current perms to preserve other permission bits on Linux
os.chmod(filepath, stat.S_IWRITE | os.stat(filepath).st_mode)
class uTorrentAPI(object):
def __init__(self, host = 'localhost', port = 8000, username = None, password = None):
super(uTorrentAPI, self).__init__()
self.url = 'http://' + str(host) + ':' + str(port) + '/gui/'
self.token = ''
self.last_time = time.time()
cookies = cookielib.CookieJar()
self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
self.opener.addheaders = [('User-agent', 'couchpotato-utorrent-client/1.0')]
if username and password:
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(realm = None, uri = self.url, user = username, passwd = password)
self.opener.add_handler(urllib2.HTTPBasicAuthHandler(password_manager))
elif username or password:
log.debug('User or password missing, not using authentication.')
self.token = self.get_token()
def _request(self, action, data = None):
if time.time() > self.last_time + 1800:
self.last_time = time.time()
self.token = self.get_token()
request = urllib2.Request(self.url + '?token=' + self.token + '&' + action, data)
try:
open_request = self.opener.open(request)
response = open_request.read()
if response:
return response
else:
log.debug('Unknown failure sending command to uTorrent. Return text is: %s', response)
except httplib.InvalidURL as err:
log.error('Invalid uTorrent host, check your config %s', err)
except urllib2.HTTPError as err:
if err.code == 401:
log.error('Invalid uTorrent Username or Password, check your config')
else:
log.error('uTorrent HTTPError: %s', err)
except urllib2.URLError as err:
log.error('Unable to connect to uTorrent %s', err)
return False
def get_token(self):
request = self.opener.open(self.url + 'token.html')
token = re.findall('<div.*?>(.*?)</', request.read())[0]
return token
def add_torrent_uri(self, filename, torrent, add_folder = False):
action = 'action=add-url&s=%s' % urllib.quote(torrent)
if add_folder:
action += '&path=%s' % urllib.quote(filename)
return self._request(action)
def add_torrent_file(self, filename, filedata, add_folder = False):
action = 'action=add-file'
if add_folder:
action += '&path=%s' % urllib.quote(filename)
return self._request(action, {'torrent_file': (ss(filename), filedata)})
def set_torrent(self, hash, params):
action = 'action=setprops&hash=%s' % hash
for k, v in params.items():
action += '&s=%s&v=%s' % (k, v)
return self._request(action)
def pause_torrent(self, hash, pause = True):
if pause:
action = 'action=pause&hash=%s' % hash
else:
action = 'action=unpause&hash=%s' % hash
return self._request(action)
def stop_torrent(self, hash):
action = 'action=stop&hash=%s' % hash
return self._request(action)
def remove_torrent(self, hash, remove_data = False):
if remove_data:
action = 'action=removedata&hash=%s' % hash
else:
action = 'action=remove&hash=%s' % hash
return self._request(action)
def get_status(self):
action = 'list=1'
return self._request(action)
def get_settings(self):
action = 'action=getsettings'
settings_dict = {}
try:
utorrent_settings = json.loads(self._request(action))
# Create settings dict
for setting in utorrent_settings['settings']:
if setting[1] == 0: # int
settings_dict[setting[0]] = int(setting[2] if not setting[2].strip() == '' else '0')
elif setting[1] == 1: # bool
settings_dict[setting[0]] = True if setting[2] == 'true' else False
elif setting[1] == 2: # string
settings_dict[setting[0]] = setting[2]
#log.debug('uTorrent settings: %s', settings_dict)
except Exception as err:
log.error('Failed to get settings from uTorrent: %s', err)
return settings_dict
def set_settings(self, settings_dict = None):
if not settings_dict: settings_dict = {}
for key in settings_dict:
if isinstance(settings_dict[key], bool):
settings_dict[key] = 1 if settings_dict[key] else 0
action = 'action=setsetting' + ''.join(['&s=%s&v=%s' % (key, value) for (key, value) in settings_dict.items()])
return self._request(action)
def get_files(self, hash):
action = 'action=getfiles&hash=%s' % hash
return self._request(action)
def get_build(self):
data = self._request('')
if not data:
return False
response = json.loads(data)
return int(response.get('build'))
config = [{
'name': 'utorrent',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'utorrent',
'label': 'uTorrent',
'description': 'Use <a href="http://www.utorrent.com/" target="_blank">uTorrent</a> (3.0+) to download torrents.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:8000',
'description': 'Port can be found in settings when enabling WebUI.',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'label',
'description': 'Label to add torrent as.',
},
{
'name': 'remove_complete',
'label': 'Remove torrent',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Remove the torrent from uTorrent after it finished seeding.',
},
{
'name': 'delete_files',
'label': 'Remove files',
'default': True,
'type': 'bool',
'advanced': True,
'description': 'Also remove the leftover files.',
},
{
'name': 'paused',
'type': 'bool',
'advanced': True,
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
{
'name': 'delete_failed',
'default': True,
'advanced': True,
'type': 'bool',
'description': 'Delete a release after the download has failed.',
},
],
}
],
}]
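The `set_torrent` method above assembles the uTorrent WebUI query string by appending `&s=<key>&v=<value>` pairs to the base action. A minimal standalone sketch of that string building (the hash and parameter names below are made up for illustration):

```python
# Sketch of how the setprops action string is assembled, mirroring
# set_torrent() above. Keys are sorted here only to make output stable.
def build_setprops_action(torrent_hash, params):
    action = 'action=setprops&hash=%s' % torrent_hash
    for k, v in sorted(params.items()):
        action += '&s=%s&v=%s' % (k, v)
    return action

print(build_setprops_action('ABCDEF0123456789ABCDEF0123456789ABCDEF01',
                            {'label': 'movies', 'seed_ratio': 1000}))
```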


@@ -1,55 +0,0 @@
from .main import uTorrent
def start():
return uTorrent()
config = [{
'name': 'utorrent',
'groups': [
{
'tab': 'downloaders',
'list': 'download_providers',
'name': 'utorrent',
'label': 'uTorrent',
'description': 'Use <a href="http://www.utorrent.com/" target="_blank">uTorrent</a> to download torrents.',
'wizard': True,
'options': [
{
'name': 'enabled',
'default': 0,
'type': 'enabler',
'radio_group': 'torrent',
},
{
'name': 'host',
'default': 'localhost:8000',
'description': 'Hostname with port. Usually <strong>localhost:8000</strong>',
},
{
'name': 'username',
},
{
'name': 'password',
'type': 'password',
},
{
'name': 'label',
'description': 'Label to add torrent as.',
},
{
'name': 'paused',
'type': 'bool',
'default': False,
'description': 'Add the torrent paused.',
},
{
'name': 'manual',
'default': 0,
'type': 'bool',
'advanced': True,
'description': 'Disable this downloader for automated searches, but use it when I manually send a release.',
},
],
}
],
}]


@@ -1,135 +0,0 @@
from bencode import bencode, bdecode
from couchpotato.core.downloaders.base import Downloader
from couchpotato.core.helpers.encoding import isInt, ss
from couchpotato.core.logger import CPLog
from hashlib import sha1
from multipartpost import MultipartPostHandler
import cookielib
import httplib
import re
import time
import urllib
import urllib2
log = CPLog(__name__)
class uTorrent(Downloader):
type = ['torrent', 'torrent_magnet']
utorrent_api = None
def download(self, data, movie, filedata = None):
log.debug('Sending "%s" (%s) to uTorrent.', (data.get('name'), data.get('type')))
# Load host from config and split out port.
host = self.conf('host').split(':')
if not isInt(host[1]):
log.error('Config properties are not filled in correctly, port is missing.')
return False
torrent_params = {}
if self.conf('label'):
torrent_params['label'] = self.conf('label')
if not filedata and data.get('type') == 'torrent':
log.error('Failed sending torrent, no data')
return False
if data.get('type') == 'torrent_magnet':
torrent_hash = re.findall('urn:btih:([\w]{32,40})', data.get('url'))[0].upper()
torrent_params['trackers'] = '%0D%0A%0D%0A'.join(self.torrent_trackers)
else:
info = bdecode(filedata)["info"]
torrent_hash = sha1(bencode(info)).hexdigest().upper()
torrent_filename = self.createFileName(data, filedata, movie)
# Send request to uTorrent
try:
if not self.utorrent_api:
self.utorrent_api = uTorrentAPI(host[0], port = host[1], username = self.conf('username'), password = self.conf('password'))
if data.get('type') == 'torrent_magnet':
self.utorrent_api.add_torrent_uri(data.get('url'))
else:
self.utorrent_api.add_torrent_file(torrent_filename, filedata)
# Change settings of added torrents
self.utorrent_api.set_torrent(torrent_hash, torrent_params)
if self.conf('paused', default = 0):
self.utorrent_api.pause_torrent(torrent_hash)
return True
except Exception, err:
log.error('Failed to send torrent to uTorrent: %s', err)
return False
class uTorrentAPI(object):
def __init__(self, host = 'localhost', port = 8000, username = None, password = None):
super(uTorrentAPI, self).__init__()
self.url = 'http://' + str(host) + ':' + str(port) + '/gui/'
self.token = ''
self.last_time = time.time()
cookies = cookielib.CookieJar()
self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(cookies), MultipartPostHandler)
self.opener.addheaders = [('User-agent', 'couchpotato-utorrent-client/1.0')]
if username and password:
password_manager = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_manager.add_password(realm = None, uri = self.url, user = username, passwd = password)
self.opener.add_handler(urllib2.HTTPBasicAuthHandler(password_manager))
self.opener.add_handler(urllib2.HTTPDigestAuthHandler(password_manager))
elif username or password:
log.debug('User or password missing, not using authentication.')
self.token = self.get_token()
def _request(self, action, data = None):
if time.time() > self.last_time + 1800:
self.last_time = time.time()
self.token = self.get_token()
request = urllib2.Request(self.url + "?token=" + self.token + "&" + action, data)
try:
open_request = self.opener.open(request)
response = open_request.read()
log.debug('response: %s', response)
if response:
log.debug('uTorrent action successful')
return response
else:
log.debug('Unknown failure sending command to uTorrent. Return text is: %s', response)
except httplib.InvalidURL, err:
log.error('Invalid uTorrent host, check your config %s', err)
except urllib2.HTTPError, err:
if err.code == 401:
log.error('Invalid uTorrent Username or Password, check your config')
else:
log.error('uTorrent HTTPError: %s', err)
except urllib2.URLError, err:
log.error('Unable to connect to uTorrent %s', err)
return False
def get_token(self):
request = self.opener.open(self.url + "token.html")
token = re.findall("<div.*?>(.*?)</", request.read())[0]
return token
def add_torrent_uri(self, torrent):
action = "action=add-url&s=%s" % urllib.quote(torrent)
return self._request(action)
def add_torrent_file(self, filename, filedata):
action = "action=add-file"
return self._request(action, {"torrent_file": (ss(filename), filedata)})
def set_torrent(self, hash, params):
action = "action=setprops&hash=%s" % hash
for k, v in params.iteritems():
action += "&s=%s&v=%s" % (k, v)
return self._request(action)
def pause_torrent(self, hash):
action = "action=pause&hash=%s" % hash
return self._request(action)
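The `download()` method above derives the torrent hash as the SHA-1 of the re-bencoded `info` dict. The original relies on an external `bencode` module; here is a self-contained sketch with a minimal encoder (the example info dict is invented, not a real torrent):

```python
import hashlib

def bencode(obj):
    # Minimal bencoder covering the types found in a torrent's info dict.
    if isinstance(obj, int):
        return b'i%de' % obj
    if isinstance(obj, bytes):
        return b'%d:%s' % (len(obj), obj)
    if isinstance(obj, str):
        return bencode(obj.encode('utf-8'))
    if isinstance(obj, list):
        return b'l' + b''.join(bencode(i) for i in obj) + b'e'
    if isinstance(obj, dict):
        # dict keys must be emitted in sorted order per the bencode spec
        return b'd' + b''.join(bencode(k) + bencode(v)
                               for k, v in sorted(obj.items())) + b'e'
    raise TypeError('cannot bencode %r' % type(obj))

def info_hash(info):
    # SHA-1 of the bencoded info dict, uppercased like the downloader does
    return hashlib.sha1(bencode(info)).hexdigest().upper()

print(info_hash({'name': 'example', 'piece length': 262144, 'length': 1}))
```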


@@ -1,12 +1,15 @@
from axl.axel import Event
from couchpotato.core.helpers.variable import mergeDicts, natcmp
from couchpotato.core.logger import CPLog
import threading
import traceback
from axl.axel import Event
from couchpotato.core.helpers.variable import mergeDicts, natsortKey
from couchpotato.core.logger import CPLog
log = CPLog(__name__)
events = {}
def runHandler(name, handler, *args, **kwargs):
try:
return handler(*args, **kwargs)
@@ -14,44 +17,54 @@ def runHandler(name, handler, *args, **kwargs):
from couchpotato.environment import Env
log.error('Error in event "%s", that wasn\'t caught: %s%s', (name, traceback.format_exc(), Env.all() if not Env.get('dev') else ''))
def addEvent(name, handler, priority = 100):
if events.get(name):
e = events[name]
else:
e = events[name] = Event(name = name, threads = 10, exc_info = True, traceback = True, lock = threading.RLock())
if not events.get(name):
events[name] = []
def createHandle(*args, **kwargs):
h = None
try:
parent = handler.im_self
bc = hasattr(parent, 'beforeCall')
if bc: parent.beforeCall(handler)
# Open handler
has_parent = hasattr(handler, 'im_self')
parent = None
if has_parent:
parent = handler.__self__
bc = hasattr(parent, 'beforeCall')
if bc: parent.beforeCall(handler)
# Main event
h = runHandler(name, handler, *args, **kwargs)
ac = hasattr(parent, 'afterCall')
if ac: parent.afterCall(handler)
# Close handler
if parent and has_parent:
ac = hasattr(parent, 'afterCall')
if ac: parent.afterCall(handler)
except:
h = runHandler(name, handler, *args, **kwargs)
log.error('Failed creating handler %s %s: %s', (name, handler, traceback.format_exc()))
return h
e.handle(createHandle, priority = priority)
events[name].append({
'handler': createHandle,
'priority': priority,
})
def removeEvent(name, handler):
e = events[name]
e -= handler
def fireEvent(name, *args, **kwargs):
if not events.get(name): return
if name not in events: return
#log.debug('Firing event %s', name)
try:
options = {
'is_after_event': False, # Fire after event
'on_complete': False, # onComplete event
'single': False, # Return single handler
'merge': False, # Merge items
'in_order': False, # Fire them in specific order, waits for the other to finish
'is_after_event': False, # Fire after event
'on_complete': False, # onComplete event
'single': False, # Return single handler
'merge': False, # Merge items
'in_order': False, # Fire them in specific order, waits for the other to finish
}
# Do options
@@ -62,28 +75,41 @@ def fireEvent(name, *args, **kwargs):
options[x] = val
except: pass
e = events[name]
if len(events[name]) == 1:
# Lock this event
e.lock.acquire()
single = None
try:
single = events[name][0]['handler'](*args, **kwargs)
except:
log.error('Failed running single event: %s', traceback.format_exc())
e.asynchronous = False
# Don't load thread for single event
result = {
'single': (single is not None, single),
}
# Make sure only 1 event is fired at a time when order is wanted
kwargs['event_order_lock'] = threading.RLock() if options['in_order'] or options['single'] else None
kwargs['event_return_on_result'] = options['single']
else:
# Fire
result = e(*args, **kwargs)
e = Event(name = name, threads = 10, exc_info = True, traceback = True, lock = threading.RLock())
# Release lock for this event
e.lock.release()
for event in events[name]:
e.handle(event['handler'], priority = event['priority'])
# Make sure only 1 event is fired at a time when order is wanted
kwargs['event_order_lock'] = threading.RLock() if options['in_order'] or options['single'] else None
kwargs['event_return_on_result'] = options['single']
# Fire
result = e(*args, **kwargs)
result_keys = result.keys()
result_keys.sort(key = natsortKey)
if options['single'] and not options['merge']:
results = None
# Loop over results, stop when first not None result is found.
for r_key in sorted(result.iterkeys(), cmp = natcmp):
for r_key in result_keys:
r = result[r_key]
if r[0] is True and r[1] is not None:
results = r[1]
@@ -95,7 +121,7 @@ def fireEvent(name, *args, **kwargs):
else:
results = []
for r_key in sorted(result.iterkeys(), cmp = natcmp):
for r_key in result_keys:
r = result[r_key]
if r[0] == True and r[1]:
results.append(r[1])
@@ -104,11 +130,14 @@ def fireEvent(name, *args, **kwargs):
# Merge
if options['merge'] and len(results) > 0:
# Dict
if isinstance(results[0], dict):
results.reverse()
merged = {}
for result in results:
merged = mergeDicts(merged, result)
merged = mergeDicts(merged, result, prepend_list = True)
results = merged
# Lists
@@ -132,23 +161,24 @@ def fireEvent(name, *args, **kwargs):
options['on_complete']()
return results
except KeyError, e:
pass
except Exception:
log.error('%s: %s', (name, traceback.format_exc()))
def fireEventAsync(*args, **kwargs):
try:
my_thread = threading.Thread(target = fireEvent, args = args, kwargs = kwargs)
my_thread.setDaemon(True)
my_thread.start()
t = threading.Thread(target = fireEvent, args = args, kwargs = kwargs)
t.setDaemon(True)
t.start()
return True
except Exception, e:
except Exception as e:
log.error('%s: %s', (args[0], e))
def errorHandler(error):
etype, value, tb = error
log.error(''.join(traceback.format_exception(etype, value, tb)))
def getEvent(name):
return events[name]
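The refactor above replaces the `axl` `Event` object with a plain per-name list of `{handler, priority}` entries, rebuilt into an `Event` at fire time. A toy, simplified sketch of that registry pattern (not the CouchPotato API itself):

```python
# Handlers are stored per event name with a priority and fired in
# ascending priority order, collecting each handler's return value.
events = {}

def add_event(name, handler, priority=100):
    events.setdefault(name, []).append({'handler': handler, 'priority': priority})

def fire_event(name, *args, **kwargs):
    results = []
    for entry in sorted(events.get(name, []), key=lambda e: e['priority']):
        results.append(entry['handler'](*args, **kwargs))
    return results

add_event('greet', lambda who: 'hi %s' % who, priority=10)
add_event('greet', lambda who: 'hello %s' % who, priority=5)
print(fire_event('greet', 'world'))
```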


@@ -1,17 +1,23 @@
from couchpotato.core.logger import CPLog
from string import ascii_letters, digits
from urllib import quote_plus
import os
import re
import traceback
import unicodedata
from couchpotato.core.logger import CPLog
import six
log = CPLog(__name__)
def toSafeString(original):
valid_chars = "-_.() %s%s" % (ascii_letters, digits)
cleanedFilename = unicodedata.normalize('NFKD', toUnicode(original)).encode('ASCII', 'ignore')
return ''.join(c for c in cleanedFilename if c in valid_chars)
cleaned_filename = unicodedata.normalize('NFKD', toUnicode(original)).encode('ASCII', 'ignore')
valid_string = ''.join(c for c in cleaned_filename if c in valid_chars)
return ' '.join(valid_string.split())
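The new `toSafeString` above additionally collapses runs of whitespace after stripping invalid characters. A standalone sketch of the same cleanup (function name adapted for the example):

```python
import unicodedata
from string import ascii_letters, digits

# NFKD-normalize to ASCII, keep only filename-safe characters, then
# collapse repeated whitespace left behind by removed characters.
def to_safe_string(original):
    valid_chars = '-_.() %s%s' % (ascii_letters, digits)
    cleaned = unicodedata.normalize('NFKD', original).encode('ASCII', 'ignore').decode('ASCII')
    return ' '.join(''.join(c for c in cleaned if c in valid_chars).split())

print(to_safe_string('Amélie: The  Movie!'))
```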
def simplifyString(original):
string = stripAccents(original.lower())
@@ -19,13 +25,14 @@ def simplifyString(original):
split = re.split('\W+|_', string.lower())
return toUnicode(' '.join(split))
def toUnicode(original, *args):
try:
if isinstance(original, unicode):
return original
else:
try:
return unicode(original, *args)
return six.text_type(original, *args)
except:
try:
return ek(original, *args)
@@ -36,9 +43,43 @@ def toUnicode(original, *args):
ascii_text = str(original).encode('string_escape')
return toUnicode(ascii_text)
def ss(original, *args):
from couchpotato.environment import Env
return toUnicode(original, *args).encode(Env.get('encoding'))
u_original = toUnicode(original, *args)
try:
from couchpotato.environment import Env
return u_original.encode(Env.get('encoding'))
except Exception as e:
log.debug('Failed ss encoding char, force UTF8: %s', e)
return u_original.encode('UTF-8')
def sp(path, *args):
# Standardise encoding, normalise case, path and strip trailing '/' or '\'
if not path or len(path) == 0:
return path
# convert windows path (from remote box) to *nix path
if os.path.sep == '/' and '\\' in path:
path = '/' + path.replace(':', '').replace('\\', '/')
path = os.path.normpath(ss(path, *args))
# Remove any trailing path separators
if path != os.path.sep:
path = path.rstrip(os.path.sep)
# Add a trailing separator in case it is a root folder on windows (crashes guessit)
if len(path) == 2 and path[1] == ':':
path = path + os.path.sep
# Replace *NIX ambiguous '//' at the beginning of a path with '/' (crashes guessit)
path = re.sub('^//', '/', path)
return path
def ek(original, *args):
if isinstance(original, (str, unicode)):
@@ -50,6 +91,7 @@ def ek(original, *args):
return original
def isInt(value):
try:
int(value)
@@ -57,14 +99,16 @@ def isInt(value):
except ValueError:
return False
def stripAccents(s):
return ''.join((c for c in unicodedata.normalize('NFD', toUnicode(s)) if unicodedata.category(c) != 'Mn'))
def tryUrlencode(s):
new = u''
if isinstance(s, (dict)):
for key, value in s.iteritems():
new += u'&%s=%s' % (key, tryUrlencode(value))
new = six.u('')
if isinstance(s, dict):
for key, value in s.items():
new += six.u('&%s=%s') % (key, tryUrlencode(value))
return new[1:]
else:


@@ -1,19 +1,21 @@
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import natcmp
from flask.globals import current_app
from flask.helpers import json, make_response
from urllib import unquote
from werkzeug.urls import url_decode
import flask
import re
def getParams():
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import natsortKey
def getParams(params):
params = url_decode(getattr(flask.request, 'environ').get('QUERY_STRING', ''))
reg = re.compile('^[a-z0-9_\.]+$')
current = temp = {}
for param, value in sorted(params.iteritems()):
# Sort keys
param_keys = params.keys()
param_keys.sort(key = natsortKey)
temp = {}
for param in param_keys:
value = params[param]
nest = re.split("([\[\]]+)", param)
if len(nest) > 1:
@@ -36,16 +38,31 @@ def getParams():
current = current[item]
else:
temp[param] = toUnicode(unquote(value))
if temp[param].lower() in ['true', 'false']:
temp[param] = temp[param].lower() != 'false'
return dictToList(temp)
non_decimal = re.compile(r'[^\d.]+')
def dictToList(params):
if type(params) is dict:
new = {}
for x, value in params.iteritems():
for x, value in params.items():
try:
new_value = [dictToList(value[k]) for k in sorted(value.iterkeys(), cmp = natcmp)]
convert = lambda text: int(text) if text.isdigit() else text.lower()
alphanum_key = lambda key: [convert(c) for c in re.split('([0-9]+)', key)]
sorted_keys = sorted(value.keys(), key = alphanum_key)
all_ints = 0
for pnr in sorted_keys:
all_ints += 1 if non_decimal.sub('', pnr) == pnr else 0
if all_ints == len(sorted_keys):
new_value = [dictToList(value[k]) for k in sorted_keys]
else:
new_value = value
except:
new_value = value
@@ -54,29 +71,3 @@ def dictToList(params):
new = params
return new
def getParam(attr, default = None):
try:
return getParams().get(attr, default)
except:
return default
def padded_jsonify(callback, *args, **kwargs):
content = str(callback) + '(' + json.dumps(dict(*args, **kwargs)) + ')'
return getattr(current_app, 'response_class')(content, mimetype = 'text/javascript')
def jsonify(mimetype, *args, **kwargs):
content = json.dumps(dict(*args, **kwargs))
return getattr(current_app, 'response_class')(content, mimetype = mimetype)
def jsonified(*args, **kwargs):
callback = getParam('callback_func', None)
if callback:
content = padded_jsonify(callback, *args, **kwargs)
else:
content = jsonify('application/json', *args, **kwargs)
response = make_response(content)
response.cache_control.no_cache = True
return response
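The `dictToList` change above converts a params dict to a list only when every key is numeric, using a natural (human) sort so `'10'` lands after `'2'`. A compact sketch of that idea:

```python
import re

# Form-style params arrive as dicts keyed by index strings; when every
# key is numeric, convert the dict to a list in natural order.
non_decimal = re.compile(r'[^\d.]+')

def dict_to_list(params):
    if not isinstance(params, dict):
        return params
    keys = sorted(params, key=lambda k: [int(c) if c.isdigit() else c.lower()
                                         for c in re.split(r'([0-9]+)', k)])
    if keys and all(non_decimal.sub('', k) == k for k in keys):
        return [dict_to_list(params[k]) for k in keys]
    return {k: dict_to_list(v) for k, v in params.items()}

print(dict_to_list({'10': 'c', '2': 'b', '1': 'a'}))
```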


@@ -1,12 +1,15 @@
from couchpotato.core.logger import CPLog
import xml.etree.ElementTree as XMLTree
from couchpotato.core.logger import CPLog
log = CPLog(__name__)
class RSS(object):
def getTextElements(self, xml, path):
''' Find elements and return tree'''
""" Find elements and return tree"""
textelements = []
try:
@@ -28,7 +31,7 @@ class RSS(object):
return elements
def getElement(self, xml, path):
''' Find element and return text'''
""" Find element and return text"""
try:
return xml.find(path)
@@ -36,7 +39,7 @@ class RSS(object):
return
def getTextElement(self, xml, path):
''' Find element and return text'''
""" Find element and return text"""
try:
return xml.find(path).text
@@ -46,6 +49,6 @@ class RSS(object):
def getItems(self, data, path = 'channel/item'):
try:
return XMLTree.parse(data).findall(path)
except Exception, e:
except Exception as e:
log.error('Error parsing RSS. %s', e)
return []
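The `RSS` helper above is a thin wrapper over `xml.etree.ElementTree`; a minimal sketch of `getItems`/`getTextElement` against an in-memory feed (the feed content is invented):

```python
import xml.etree.ElementTree as XMLTree
from io import StringIO

FEED = """<rss><channel>
  <item><title>First</title></item>
  <item><title>Second</title></item>
</channel></rss>"""

def get_items(data, path='channel/item'):
    # parse() accepts any file-like object, so an in-memory feed works
    return XMLTree.parse(data).findall(path)

def get_text_element(xml, path):
    el = xml.find(path)
    return el.text if el is not None else None

print([get_text_element(i, 'title') for i in get_items(StringIO(FEED))])
```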


@@ -1,15 +1,43 @@
from couchpotato.core.helpers.encoding import simplifyString, toSafeString
from couchpotato.core.logger import CPLog
import collections
import ctypes
import hashlib
import os.path
import os
import platform
import random
import re
import string
import sys
import traceback
from couchpotato.core.helpers.encoding import simplifyString, toSafeString, ss, sp
from couchpotato.core.logger import CPLog
import six
from six.moves import map, zip, filter
log = CPLog(__name__)
def fnEscape(pattern):
return pattern.replace('[', '[[').replace(']', '[]]').replace('[[', '[[]')
def link(src, dst):
if os.name == 'nt':
import ctypes
if ctypes.windll.kernel32.CreateHardLinkW(six.text_type(dst), six.text_type(src), 0) == 0: raise ctypes.WinError()
else:
os.link(src, dst)
def symlink(src, dst):
if os.name == 'nt':
import ctypes
if ctypes.windll.kernel32.CreateSymbolicLinkW(six.text_type(dst), six.text_type(src), 1 if os.path.isdir(src) else 0) in [0, 1280]: raise ctypes.WinError()
else:
os.symlink(src, dst)
def getUserDir():
try:
import pwd
@@ -19,6 +47,7 @@ def getUserDir():
return os.path.expanduser('~')
def getDownloadDir():
user_dir = getUserDir()
@@ -31,6 +60,7 @@ def getDownloadDir():
return user_dir
def getDataDir():
# Windows
@@ -50,10 +80,12 @@ def getDataDir():
# Linux
return os.path.join(user_dir, '.couchpotato')
def isDict(object):
return isinstance(object, dict)
def mergeDicts(a, b):
def isDict(obj):
return isinstance(obj, dict)
def mergeDicts(a, b, prepend_list = False):
assert isDict(a), isDict(b)
dst = a.copy()
@@ -67,12 +99,13 @@ def mergeDicts(a, b):
if isDict(current_src[key]) and isDict(current_dst[key]):
stack.append((current_dst[key], current_src[key]))
elif isinstance(current_src[key], list) and isinstance(current_dst[key], list):
current_dst[key].extend(current_src[key])
current_dst[key] = current_src[key] + current_dst[key] if prepend_list else current_dst[key] + current_src[key]
current_dst[key] = removeListDuplicates(current_dst[key])
else:
current_dst[key] = current_src[key]
return dst
def removeListDuplicates(seq):
checked = []
for e in seq:
@@ -80,31 +113,79 @@ def removeListDuplicates(seq):
checked.append(e)
return checked
def flattenList(l):
if isinstance(l, list):
return sum(map(flattenList, l))
else:
return l
def md5(text):
return hashlib.md5(text).hexdigest()
return hashlib.md5(ss(text)).hexdigest()
def sha1(text):
return hashlib.sha1(text).hexdigest()
def isLocalIP(ip):
ip = ip.lstrip('htps:/')
regex = '/(^127\.)|(^192\.168\.)|(^10\.)|(^172\.1[6-9]\.)|(^172\.2[0-9]\.)|(^172\.3[0-1]\.)|(^::1)$/'
return re.search(regex, ip) is not None or 'localhost' in ip or ip[:4] == '127.'
def getExt(filename):
return os.path.splitext(filename)[1][1:]
def cleanHost(host):
if not host.startswith(('http://', 'https://')):
host = 'http://' + host
if not host.endswith('/'):
def cleanHost(host, protocol = True, ssl = False, username = None, password = None):
"""Return a cleaned up host with given url options set
Changes protocol to https if ssl is set to True and http if ssl is set to false.
>>> cleanHost("localhost:80", ssl=True)
'https://localhost:80/'
>>> cleanHost("localhost:80", ssl=False)
'http://localhost:80/'
Username and password is managed with the username and password variables
>>> cleanHost("localhost:80", username="user", password="passwd")
'http://user:passwd@localhost:80/'
Output without scheme (protocol) can be forced with protocol=False
>>> cleanHost("localhost:80", protocol=False)
'localhost:80'
"""
if not '://' in host and protocol:
host = ('https://' if ssl else 'http://') + host
if not protocol:
host = host.split('://', 1)[-1]
if protocol and username and password:
try:
auth = re.findall('^(?:.+?//)(.+?):(.+?)@(?:.+)$', host)
if auth:
log.error('Cleanhost error: auth already defined in url: %s, please remove BasicAuth from url.', host)
else:
host = host.replace('://', '://%s:%s@' % (username, password), 1)
except:
pass
host = host.rstrip('/ ')
if protocol:
host += '/'
return host
def getImdb(txt, check_inside = True, multiple = False):
def getImdb(txt, check_inside = False, multiple = False):
if not check_inside:
txt = simplifyString(txt)
else:
txt = ss(txt)
if check_inside and os.path.isfile(txt):
output = open(txt, 'r')
@@ -112,60 +193,190 @@ def getImdb(txt, check_inside = True, multiple = False):
output.close()
try:
ids = re.findall('(tt\d{7})', txt)
ids = re.findall('(tt\d{4,7})', txt)
if multiple:
return ids if len(ids) > 0 else []
return ids[0]
return removeDuplicate(['tt%07d' % tryInt(x[2:]) for x in ids]) if len(ids) > 0 else []
return 'tt%07d' % tryInt(ids[0][2:])
except IndexError:
pass
return False
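The `getImdb` change above widens the pattern to `tt\d{4,7}`, zero-pads matches back to seven digits, and de-duplicates the results. A standalone sketch of that normalization:

```python
import re

# Find tt ids with 4-7 digits, zero-pad to 7, and de-duplicate while
# preserving the order of first appearance.
def find_imdb_ids(txt):
    seen = set()
    out = []
    for raw in re.findall(r'(tt\d{4,7})', txt):
        normalized = 'tt%07d' % int(raw[2:])
        if normalized not in seen:
            seen.add(normalized)
            out.append(normalized)
    return out

print(find_imdb_ids('see tt0133093 and tt1375666, also tt133093'))
```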
def tryInt(s):
def tryInt(s, default = 0):
try: return int(s)
except: return 0
except: return default
def tryFloat(s):
try: return float(s) if '.' in s else tryInt(s)
try:
if isinstance(s, str):
return float(s) if '.' in s else tryInt(s)
else:
return float(s)
except: return 0
def natsortKey(s):
return map(tryInt, re.findall(r'(\d+|\D+)', s))
def natcmp(a, b):
return cmp(natsortKey(a), natsortKey(b))
def natsortKey(string_):
"""See http://www.codinghorror.com/blog/archives/001018.html"""
return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_)]
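The rewritten `natsortKey` splits on digit runs so numbers compare numerically rather than lexically; a quick demonstration:

```python
import re

# Same key function as natsortKey above: digit runs become ints,
# so 'file10' sorts after 'file2'.
def natsort_key(string_):
    return [int(s) if s.isdigit() else s for s in re.split(r'(\d+)', string_)]

print(sorted(['file10', 'file2', 'file1'], key=natsort_key))
```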
def getTitle(library_dict):
def toIterable(value):
if isinstance(value, collections.Iterable):
return value
return [value]
def getIdentifier(media):
return media.get('identifier') or media.get('identifiers', {}).get('imdb')
def getTitle(media_dict):
try:
try:
return library_dict['titles'][0]['title']
return media_dict['title']
except:
try:
for title in library_dict.titles:
if title.default:
return title.title
return media_dict['titles'][0]
except:
log.error('Could not get title for %s', library_dict.identifier)
return None
log.error('Could not get title for %s', library_dict['identifier'])
return None
try:
return media_dict['info']['titles'][0]
except:
try:
return media_dict['media']['info']['titles'][0]
except:
log.error('Could not get title for %s', getIdentifier(media_dict))
return None
except:
log.error('Could not get title for library item: %s', library_dict)
log.error('Could not get title for library item: %s', media_dict)
return None
def possibleTitles(raw_title):
titles = []
titles = [
toSafeString(raw_title).lower(),
raw_title.lower(),
simplifyString(raw_title)
]
titles.append(toSafeString(raw_title).lower())
titles.append(raw_title.lower())
titles.append(simplifyString(raw_title))
# replace some chars
new_title = raw_title.replace('&', 'and')
titles.append(simplifyString(new_title))
return removeDuplicate(titles)
return list(set(titles))
def randomString(size = 8, chars = string.ascii_uppercase + string.digits):
return ''.join(random.choice(chars) for x in range(size))
def splitString(str, split_on = ','):
return [x.strip() for x in str.split(split_on)]
def splitString(str, split_on = ',', clean = True):
l = [x.strip() for x in str.split(split_on)] if str else []
return removeEmpty(l) if clean else l
def removeEmpty(l):
return list(filter(None, l))
def removeDuplicate(l):
seen = set()
return [x for x in l if x not in seen and not seen.add(x)]
def dictIsSubset(a, b):
return all([k in b and b[k] == v for k, v in a.items()])
# Returns True if sub_folder is the same as or inside base_folder
def isSubFolder(sub_folder, base_folder):
if base_folder and sub_folder:
base = sp(os.path.realpath(base_folder)) + os.path.sep
subfolder = sp(os.path.realpath(sub_folder)) + os.path.sep
return os.path.commonprefix([subfolder, base]) == base
return False
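The `isSubFolder` check above appends a trailing separator before comparing prefixes, so `/a/bc` is not mistaken for a child of `/a/b`. A self-contained sketch (omitting the `sp` path normalization):

```python
import os

# Resolve both paths, append a separator, and compare common prefixes.
def is_sub_folder(sub_folder, base_folder):
    if not (base_folder and sub_folder):
        return False
    base = os.path.realpath(base_folder) + os.path.sep
    sub = os.path.realpath(sub_folder) + os.path.sep
    return os.path.commonprefix([sub, base]) == base

print(is_sub_folder('/tmp/movies/action', '/tmp/movies'))
print(is_sub_folder('/tmp/movies2', '/tmp/movies'))
```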
# From SABNZBD
re_password = [re.compile(r'(.+){{([^{}]+)}}$'), re.compile(r'(.+)\s+password\s*=\s*(.+)$', re.I)]
def scanForPassword(name):
m = None
for reg in re_password:
m = reg.search(name)
if m: break
if m:
return m.group(1).strip('. '), m.group(2).strip()
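The SABnzbd-derived `scanForPassword` above recognizes two embedding styles: `Name{{pass}}` and `Name password=pass`. A standalone sketch using the same patterns:

```python
import re

# A password can be embedded as "Name{{pass}}" or "Name password=pass".
re_password = [re.compile(r'(.+){{([^{}]+)}}$'),
               re.compile(r'(.+)\s+password\s*=\s*(.+)$', re.I)]

def scan_for_password(name):
    for reg in re_password:
        m = reg.search(name)
        if m:
            return m.group(1).strip('. '), m.group(2).strip()
    return None

print(scan_for_password('Some.Release{{s3cret}}'))
```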
under_pat = re.compile(r'_([a-z])')
def underscoreToCamel(name):
return under_pat.sub(lambda x: x.group(1).upper(), name)
def removePyc(folder, only_excess = True, show_logs = True):
folder = sp(folder)
for root, dirs, files in os.walk(folder):
pyc_files = filter(lambda filename: filename.endswith('.pyc'), files)
py_files = set(filter(lambda filename: filename.endswith('.py'), files))
excess_pyc_files = filter(lambda pyc_filename: pyc_filename[:-1] not in py_files, pyc_files) if only_excess else pyc_files
for excess_pyc_file in excess_pyc_files:
full_path = os.path.join(root, excess_pyc_file)
if show_logs: log.debug('Removing old PYC file: %s', full_path)
try:
os.remove(full_path)
except:
log.error('Couldn\'t remove %s: %s', (full_path, traceback.format_exc()))
for dir_name in dirs:
full_path = os.path.join(root, dir_name)
if len(os.listdir(full_path)) == 0:
try:
os.rmdir(full_path)
except:
log.error('Couldn\'t remove empty directory %s: %s', (full_path, traceback.format_exc()))
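The filtering at the heart of `removePyc` above treats a `.pyc` as "excess" when no matching `.py` remains alongside it. The selection logic, sketched without the filesystem walk:

```python
# Given one directory's file names, return the .pyc files whose source
# .py no longer exists (or all .pyc files when only_excess is False).
def excess_pyc(files, only_excess=True):
    pyc_files = [f for f in files if f.endswith('.pyc')]
    if not only_excess:
        return pyc_files
    py_files = set(f for f in files if f.endswith('.py'))
    return [pyc for pyc in pyc_files if pyc[:-1] not in py_files]

print(excess_pyc(['a.py', 'a.pyc', 'b.pyc', 'c.txt']))
```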
def getFreeSpace(directories):
single = not isinstance(directories, (tuple, list))
if single:
directories = [directories]
free_space = {}
for folder in directories:
size = None
if os.path.isdir(folder):
if os.name == 'nt':
_, total, free = ctypes.c_ulonglong(), ctypes.c_ulonglong(), \
ctypes.c_ulonglong()
if sys.version_info >= (3,) or isinstance(folder, unicode):
fun = ctypes.windll.kernel32.GetDiskFreeSpaceExW #@UndefinedVariable
else:
fun = ctypes.windll.kernel32.GetDiskFreeSpaceExA #@UndefinedVariable
ret = fun(folder, ctypes.byref(_), ctypes.byref(total), ctypes.byref(free))
if ret == 0:
raise ctypes.WinError()
return [total.value, free.value]
else:
s = os.statvfs(folder)
size = [s.f_blocks * s.f_frsize / (1024 * 1024), (s.f_bavail * s.f_frsize) / (1024 * 1024)]
if single: return size
free_space[folder] = size
return free_space


@@ -1,59 +1,71 @@
import os
import sys
import traceback
from couchpotato.core.event import fireEvent
from couchpotato.core.logger import CPLog
import glob
import os
import traceback
from importhelper import import_module
import six
log = CPLog(__name__)
class Loader(object):
plugins = {}
providers = {}
modules = {}
def __init__(self):
self.plugins = {}
self.providers = {}
self.modules = {}
self.paths = {}
def preload(self, root = ''):
core = os.path.join(root, 'couchpotato', 'core')
self.paths = {
self.paths.update({
'core': (0, 'couchpotato.core._base', os.path.join(core, '_base')),
'plugin': (1, 'couchpotato.core.plugins', os.path.join(core, 'plugins')),
'notifications': (20, 'couchpotato.core.notifications', os.path.join(core, 'notifications')),
'downloaders': (20, 'couchpotato.core.downloaders', os.path.join(core, 'downloaders')),
}
})
# Add providers to loader
provider_dir = os.path.join(root, 'couchpotato', 'core', 'providers')
for provider in os.listdir(provider_dir):
path = os.path.join(provider_dir, provider)
if os.path.isdir(path):
self.paths[provider + '_provider'] = (25, 'couchpotato.core.providers.' + provider, path)
# Add media to loader
self.addPath(root, ['couchpotato', 'core', 'media'], 25, recursive = True)
# Add custom plugin folder
from couchpotato.environment import Env
custom_plugin_dir = os.path.join(Env.get('data_dir'), 'custom_plugins')
if os.path.isdir(custom_plugin_dir):
sys.path.insert(0, custom_plugin_dir)
self.paths['custom_plugins'] = (30, '', custom_plugin_dir)
for plugin_type, plugin_tuple in self.paths.iteritems():
# Loop over all paths and add to module list
for plugin_type, plugin_tuple in self.paths.items():
priority, module, dir_name = plugin_tuple
self.addFromDir(plugin_type, priority, module, dir_name)
def run(self):
did_save = 0
for priority in self.modules:
for module_name, plugin in sorted(self.modules[priority].iteritems()):
for priority in sorted(self.modules):
for module_name, plugin in sorted(self.modules[priority].items()):
# Load module
try:
m = getattr(self.loadModule(module_name), plugin.get('name'))
if plugin.get('name')[:2] == '__':
continue
log.info('Loading %s: %s', (plugin['type'], plugin['name']))
m = self.loadModule(module_name)
if m is None:
continue
# Save default settings for plugin/provider
did_save += self.loadSettings(m, module_name, save = False)
self.loadPlugins(m, plugin.get('name'))
self.loadPlugins(m, plugin.get('type'), plugin.get('name'))
except ImportError as e:
# todo:: subclass ImportError for missing requirements.
if (e.message.lower().startswith("missing")):
if e.message.lower().startswith("missing"):
log.error(e.message)
pass
# todo:: this needs to be more descriptive.
@@ -65,27 +77,40 @@ class Loader(object):
if did_save:
fireEvent('settings.save')
def addPath(self, root, base_path, priority, recursive = False):
root_path = os.path.join(root, *base_path)
for filename in os.listdir(root_path):
path = os.path.join(root_path, filename)
if os.path.isdir(path) and filename[:2] != '__':
if six.u('__init__.py') in os.listdir(path):
new_base_path = ''.join(s + '.' for s in base_path) + filename
self.paths[new_base_path.replace('.', '_')] = (priority, new_base_path, path)
if recursive:
self.addPath(root, base_path + [filename], priority, recursive = True)
def addFromDir(self, plugin_type, priority, module, dir_name):
# Load dir module
try:
m = __import__(module)
splitted = module.split('.')
for sub in splitted[1:]:
m = getattr(m, sub)
if module and len(module) > 0:
self.addModule(priority, plugin_type, module, os.path.basename(dir_name))
if hasattr(m, 'config'):
fireEvent('settings.options', splitted[-1] + '_config', getattr(m, 'config'))
except:
raise
for name in os.listdir(dir_name):
path = os.path.join(dir_name, name)
ext = os.path.splitext(path)[1]
ext_length = len(ext)
if name != 'static' and ((os.path.isdir(path) and os.path.isfile(os.path.join(path, '__init__.py')))
or (os.path.isfile(path) and ext == '.py')):
name = name[:-ext_length] if ext_length > 0 else name
module_name = '%s.%s' % (module, name)
self.addModule(priority, plugin_type, module_name, name)
def loadSettings(self, module, name, save = True):
if not hasattr(module, 'config'):
#log.debug('Skip loading settings for plugin %s as it has no config section' % module.__file__)
return False
try:
for section in module.config:
fireEvent('settings.options', section['name'], section)
@@ -99,16 +124,22 @@ class Loader(object):
log.debug('Failed loading settings for "%s": %s', (name, traceback.format_exc()))
return False
def loadPlugins(self, module, type, name):
if not hasattr(module, 'autoload'):
#log.debug('Skip startup for plugin %s as it has no start section' % module.__file__)
return False
try:
# Load single file plugin
if isinstance(module.autoload, (str, unicode)):
getattr(module, module.autoload)()
# Load folder plugin
else:
module.autoload()
log.info('Loaded %s: %s', (type, name))
return True
except:
log.error('Failed loading plugin "%s": %s', (module.__file__, traceback.format_exc()))
return False
@@ -117,6 +148,10 @@ class Loader(object):
if not self.modules.get(priority):
self.modules[priority] = {}
module = module.lstrip('.')
if plugin_type.startswith('couchpotato_core'):
plugin_type = plugin_type[17:]
self.modules[priority][module] = {
'priority': priority,
'module': module,
@@ -126,10 +161,9 @@ class Loader(object):
def loadModule(self, name):
try:
return import_module(name)
except ImportError:
log.debug('Skip loading module plugin %s: %s', (name, traceback.format_exc()))
return None
except:
raise
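The rewritten loadModule above collapses the manual `__import__` plus `getattr` walk into a single `importlib` call, skipping modules that fail to import. A minimal Python 3 sketch of that pattern (the stand-alone `load_module` helper is hypothetical; the real method also logs the traceback):

```python
from importlib import import_module


def load_module(name):
    # Import a dotted module path; return None when the module is
    # missing so callers can skip optional plugins instead of crashing.
    try:
        return import_module(name)
    except ImportError:
        return None


mod = load_module('json')                    # stdlib, always importable
missing = load_module('no_such_plugin_xyz')  # hypothetical absent plugin
```

Returning None instead of raising is what lets the loader's outer loop `continue` past broken or optional plugins.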


@@ -1,11 +1,14 @@
import logging
import re
import traceback
class CPLog(object):
context = ''
replace_private = ['api', 'apikey', 'api_key', 'password', 'username', 'h', 'uid', 'key', 'passkey']
Env = None
is_develop = False
def __init__(self, context = ''):
if context.endswith('.main'):
@@ -14,6 +17,20 @@ class CPLog(object):
self.context = context
self.logger = logging.getLogger()
def setup(self):
if not self.Env:
from couchpotato.environment import Env
self.Env = Env
self.is_develop = Env.get('dev')
from couchpotato.core.event import addEvent
addEvent('app.after_shutdown', self.close)
def close(self, *args, **kwargs):
logging.shutdown()
def info(self, msg, replace_tuple = ()):
self.logger.info(self.addContext(msg, replace_tuple))
@@ -37,8 +54,7 @@ class CPLog(object):
def safeMessage(self, msg, replace_tuple = ()):
from couchpotato.environment import Env
from couchpotato.core.helpers.encoding import ss, toUnicode
msg = ss(msg)
@@ -50,10 +66,11 @@ class CPLog(object):
msg = msg % tuple([ss(x) for x in list(replace_tuple)])
else:
msg = msg % ss(replace_tuple)
except Exception as e:
self.logger.error('Failed encoding stuff to log "%s": %s' % (msg, e))
self.setup()
if not self.is_develop:
for replace in self.replace_private:
msg = re.sub('(\?%s=)[^\&]+' % replace, '?%s=xxx' % replace, msg)
@@ -61,10 +78,10 @@ class CPLog(object):
# Replace api key
try:
api_key = self.Env.setting('api_key')
if api_key:
msg = msg.replace(api_key, 'API_KEY')
except:
pass
return toUnicode(msg)
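Outside develop mode, safeMessage masks private query-string parameters before anything reaches the log. A small sketch of that redaction step (the stand-alone `redact` helper is hypothetical; the list mirrors `replace_private`, including the newly added `passkey`):

```python
import re

# Query-string parameters that must never reach the log file,
# mirroring CPLog.replace_private (with the newly added 'passkey').
REPLACE_PRIVATE = ['api', 'apikey', 'api_key', 'password', 'username',
                   'h', 'uid', 'key', 'passkey']


def redact(msg):
    # Mask each private '?name=value' pair, as safeMessage does when
    # not running in develop mode; the value stops at the next '&'.
    for name in REPLACE_PRIVATE:
        msg = re.sub(r'(\?%s=)[^\&]+' % name, '?%s=xxx' % name, msg)
    return msg


masked = redact('GET http://tracker/download?passkey=123abc&id=42')
```

Note the pattern only anchors on `?`, so only the first parameter of a URL is masked; later `&name=value` pairs pass through unchanged.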


@@ -0,0 +1,98 @@
import os
import traceback
from couchpotato import CPLog
from couchpotato.core.event import addEvent, fireEvent, fireEventAsync
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.plugins.base import Plugin
import six
log = CPLog(__name__)
class MediaBase(Plugin):
_type = None
def initType(self):
addEvent('media.types', self.getType)
def getType(self):
return self._type
def createOnComplete(self, media_id):
def onComplete():
try:
media = fireEvent('media.get', media_id, single = True)
event_name = '%s.searcher.single' % media.get('type')
fireEventAsync(event_name, media, on_complete = self.createNotifyFront(media_id), manual = True)
except:
log.error('Failed creating onComplete: %s', traceback.format_exc())
return onComplete
def createNotifyFront(self, media_id):
def notifyFront():
try:
media = fireEvent('media.get', media_id, single = True)
event_name = '%s.update' % media.get('type')
fireEvent('notify.frontend', type = event_name, data = media)
except:
log.error('Failed notifying frontend: %s', traceback.format_exc())
return notifyFront
def getDefaultTitle(self, info):
# Set default title
default_title = toUnicode(info.get('title'))
titles = info.get('titles', [])
counter = 0
def_title = None
for title in titles:
if (len(default_title) == 0 and counter == 0) or len(titles) == 1 or title.lower() == toUnicode(default_title.lower()) or (toUnicode(default_title) == six.u('') and toUnicode(titles[0]) == title):
def_title = toUnicode(title)
break
counter += 1
if not def_title:
def_title = toUnicode(titles[0])
return def_title or 'UNKNOWN'
def getPoster(self, image_urls, existing_files):
image_type = 'poster'
# Remove non-existing files
file_type = 'image_%s' % image_type
# Make existing unique
unique_files = list(set(existing_files.get(file_type, [])))
# Remove files that can't be found
for ef in unique_files:
if not os.path.isfile(ef):
unique_files.remove(ef)
# Replace new files list
existing_files[file_type] = unique_files
if len(unique_files) == 0:
del existing_files[file_type]
# Loop over type
for image in image_urls.get(image_type, []):
if not isinstance(image, (str, unicode)):
continue
if file_type not in existing_files or len(existing_files.get(file_type, [])) == 0:
file_path = fireEvent('file.download', url = image, single = True)
if file_path:
existing_files[file_type] = [file_path]
break
else:
break
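getPoster first prunes poster paths that no longer exist on disk before deciding whether to download a new image. A hedged sketch of that clean-up (the stand-alone `prune_missing` helper is hypothetical; note the original `remove()`s from `unique_files` while iterating it, which can skip entries, so this version filters into a new list instead):

```python
import os
import tempfile


def prune_missing(paths):
    # Return the unique paths that still exist on disk, preserving
    # order. Filtering into a fresh list avoids the skipped-entry
    # pitfall of list.remove() inside a loop over the same list.
    seen = []
    for p in paths:
        if p not in seen and os.path.isfile(p):
            seen.append(p)
    return seen


fd, real = tempfile.mkstemp()
os.close(fd)
kept = prune_missing([real, real, '/no/such/poster.jpg'])
os.unlink(real)
```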


@@ -0,0 +1,7 @@
from .main import Library
def autoload():
return Library()
config = []


@@ -0,0 +1,13 @@
from couchpotato.core.event import addEvent
from couchpotato.core.plugins.base import Plugin
class LibraryBase(Plugin):
_type = None
def initType(self):
addEvent('library.types', self.getType)
def getType(self):
return self._type


@@ -0,0 +1,18 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.media._base.library.base import LibraryBase
class Library(LibraryBase):
def __init__(self):
addEvent('library.title', self.title)
def title(self, library):
return fireEvent(
'library.query',
library,
condense = False,
include_year = False,
include_identifier = False,
single = True
)


@@ -0,0 +1,7 @@
from .main import Matcher
def autoload():
return Matcher()
config = []


@@ -0,0 +1,84 @@
from couchpotato.core.event import addEvent
from couchpotato.core.helpers.encoding import simplifyString
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
class MatcherBase(Plugin):
type = None
def __init__(self):
if self.type:
addEvent('%s.matcher.correct' % self.type, self.correct)
def correct(self, chain, release, media, quality):
raise NotImplementedError()
def flattenInfo(self, info):
# Flatten dictionary of matches (chain info)
if isinstance(info, dict):
return dict([(key, self.flattenInfo(value)) for key, value in info.items()])
# Flatten matches
result = None
for match in info:
if isinstance(match, dict):
if result is None:
result = {}
for key, value in match.items():
if key not in result:
result[key] = []
result[key].append(value)
else:
if result is None:
result = []
result.append(match)
return result
def constructFromRaw(self, match):
if not match:
return None
parts = [
''.join([
y for y in x[1:] if y
]) for x in match
]
return ''.join(parts)[:-1].strip()
def simplifyValue(self, value):
if not value:
return value
if isinstance(value, basestring):
return simplifyString(value)
if isinstance(value, list):
return [self.simplifyValue(x) for x in value]
raise ValueError("Unsupported value type")
def chainMatch(self, chain, group, tags):
info = self.flattenInfo(chain.info[group])
found_tags = []
for tag, accepted in tags.items():
values = [self.simplifyValue(x) for x in info.get(tag, [None])]
if any([val in accepted for val in values]):
found_tags.append(tag)
log.debug('tags found: %s, required: %s' % (found_tags, tags.keys()))
if set(tags.keys()) == set(found_tags):
return True
return all([key in found_tags for key, value in tags.items()])
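flattenInfo is the workhorse here: it turns Caper's list of per-match dicts into one dict of value lists so tags can be compared in bulk. A Python 3 sketch of the same merge (the stand-alone `flatten_info` helper is hypothetical; the example data is illustrative):

```python
def flatten_info(info):
    # Recursively flatten nested dicts; merge sibling match dicts
    # key-by-key into lists, and plain values into a single list.
    if isinstance(info, dict):
        return {key: flatten_info(value) for key, value in info.items()}
    result = None
    for match in info:
        if isinstance(match, dict):
            if result is None:
                result = {}
            for key, value in match.items():
                result.setdefault(key, []).append(value)
        else:
            if result is None:
                result = []
            result.append(match)
    return result


flat = flatten_info({'video': [{'codec': 'x264'},
                               {'codec': 'x265', 'res': '720p'}]})
```

chainMatch then only has to ask whether every required tag's accepted values intersect the flattened lists.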


@@ -0,0 +1,89 @@
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.variable import possibleTitles
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.matcher.base import MatcherBase
from caper import Caper
log = CPLog(__name__)
class Matcher(MatcherBase):
def __init__(self):
super(Matcher, self).__init__()
self.caper = Caper()
addEvent('matcher.parse', self.parse)
addEvent('matcher.match', self.match)
addEvent('matcher.flatten_info', self.flattenInfo)
addEvent('matcher.construct_from_raw', self.constructFromRaw)
addEvent('matcher.correct_title', self.correctTitle)
addEvent('matcher.correct_quality', self.correctQuality)
def parse(self, name, parser='scene'):
return self.caper.parse(name, parser)
def match(self, release, media, quality):
match = fireEvent('matcher.parse', release['name'], single = True)
if len(match.chains) < 1:
log.info2('Wrong: %s, unable to parse release name (no chains)', release['name'])
return False
for chain in match.chains:
if fireEvent('%s.matcher.correct' % media['type'], chain, release, media, quality, single = True):
return chain
return False
def correctTitle(self, chain, media):
root_library = media['library']['root_library']
if 'show_name' not in chain.info or not len(chain.info['show_name']):
log.info('Wrong: missing show name in parsed result')
return False
# Get the lower-case parsed show name from the chain
chain_words = [x.lower() for x in chain.info['show_name']]
# Build a list of possible titles of the media we are searching for
titles = root_library['info']['titles']
# Add year suffix titles (will result in ['<name_one>', '<name_one> <suffix_one>', '<name_two>', ...])
suffixes = [None, root_library['info']['year']]
titles = [
title + ((' %s' % suffix) if suffix else '')
for title in titles
for suffix in suffixes
]
# Check show titles match
# TODO check xem names
for title in titles:
for valid_words in [x.split(' ') for x in possibleTitles(title)]:
if valid_words == chain_words:
return True
return False
def correctQuality(self, chain, quality, quality_map):
if quality['identifier'] not in quality_map:
log.info2('Wrong: unknown preferred quality %s', quality['identifier'])
return False
if 'video' not in chain.info:
log.info2('Wrong: no video tags found')
return False
video_tags = quality_map[quality['identifier']]
if not self.chainMatch(chain, 'video', video_tags):
log.info2('Wrong: %s tags not in chain', video_tags)
return False
return True


@@ -0,0 +1,5 @@
from .main import MediaPlugin
def autoload():
return MediaPlugin()


@@ -0,0 +1,199 @@
from string import ascii_letters
from hashlib import md5
from CodernityDB.tree_index import MultiTreeBasedIndex, TreeBasedIndex
from couchpotato.core.helpers.encoding import toUnicode, simplifyString
class MediaIndex(MultiTreeBasedIndex):
_version = 3
custom_header = """from CodernityDB.tree_index import MultiTreeBasedIndex"""
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(MediaIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return md5(key).hexdigest()
def make_key_value(self, data):
if data.get('_t') == 'media' and (data.get('identifier') or data.get('identifiers')):
identifiers = data.get('identifiers', {})
if data.get('identifier') and 'imdb' not in identifiers:
identifiers['imdb'] = data.get('identifier')
ids = []
for x in identifiers:
ids.append(md5('%s-%s' % (x, identifiers[x])).hexdigest())
return ids, None
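MediaIndex stores one md5 key per provider identifier, hashed from a `'%s-%s' % (provider, id)` string. A hypothetical Python 3 sketch of that key derivation (the original hashes the raw str under Python 2; Python 3 requires encoding first):

```python
from hashlib import md5


def identifier_keys(identifiers):
    # One 32-char hex key per (provider, id) pair, matching the
    # '%s-%s' format used in MediaIndex.make_key_value.
    return [md5(('%s-%s' % (k, v)).encode()).hexdigest()
            for k, v in identifiers.items()]


keys = identifier_keys({'imdb': 'tt0468569'})
```

Hashing each pair keeps the key format fixed-width (`'32s'`), which the tree index requires.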
class MediaStatusIndex(TreeBasedIndex):
_version = 1
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(MediaStatusIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return md5(key).hexdigest()
def make_key_value(self, data):
if data.get('_t') == 'media' and data.get('status'):
return md5(data.get('status')).hexdigest(), None
class MediaTypeIndex(TreeBasedIndex):
_version = 1
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(MediaTypeIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return md5(key).hexdigest()
def make_key_value(self, data):
if data.get('_t') == 'media' and data.get('type'):
return md5(data.get('type')).hexdigest(), None
class TitleSearchIndex(MultiTreeBasedIndex):
_version = 1
custom_header = """from CodernityDB.tree_index import MultiTreeBasedIndex
from itertools import izip
from couchpotato.core.helpers.encoding import simplifyString"""
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(TitleSearchIndex, self).__init__(*args, **kwargs)
self.__l = kwargs.get('w_len', 2)
def make_key_value(self, data):
if data.get('_t') == 'media' and len(data.get('title', '')) > 0:
out = set()
title = str(simplifyString(data.get('title').lower()))
l = self.__l
title_split = title.split()
for x in range(len(title_split)):
combo = ' '.join(title_split[x:])[:32].strip()
out.add(combo.rjust(32, '_'))
combo_range = max(l, min(len(combo), 32))
for cx in range(1, combo_range):
ccombo = combo[:-cx].strip()
if len(ccombo) > l:
out.add(ccombo.rjust(32, '_'))
return out, None
def make_key(self, key):
return key.rjust(32, '_').lower()
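TitleSearchIndex indexes every word-suffix of a title plus its left-anchored prefixes, right-padded with `_` to the 32-byte key width, so a partial search term can hit the tree index directly. A sketch of that key generation (the stand-alone `title_search_keys` helper is hypothetical; `simplifyString` is approximated by lower-casing):

```python
def title_search_keys(title, min_len=2, key_len=32):
    # Emit fixed-width keys: each word-suffix of the title, and every
    # prefix of that suffix longer than min_len characters.
    out = set()
    words = title.lower().split()
    for x in range(len(words)):
        combo = ' '.join(words[x:])[:key_len].strip()
        out.add(combo.rjust(key_len, '_'))
        for cx in range(1, max(min_len, min(len(combo), key_len))):
            ccombo = combo[:-cx].strip()
            if len(ccombo) > min_len:
                out.add(ccombo.rjust(key_len, '_'))
    return out


keys = title_search_keys('iron man')
```

So searching "man" or even the prefix "iro" lands on a stored key without scanning every title.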
class TitleIndex(TreeBasedIndex):
_version = 4
custom_header = """from CodernityDB.tree_index import TreeBasedIndex
from string import ascii_letters
from couchpotato.core.helpers.encoding import toUnicode, simplifyString"""
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(TitleIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return self.simplify(key)
def make_key_value(self, data):
if data.get('_t') == 'media' and data.get('title') is not None and len(data.get('title')) > 0:
return self.simplify(data['title']), None
def simplify(self, title):
title = toUnicode(title)
nr_prefix = '' if title and len(title) > 0 and title[0] in ascii_letters else '#'
title = simplifyString(title)
for prefix in ['the ', 'an ', 'a ']:
if prefix == title[:len(prefix)]:
title = title[len(prefix):]
break
return str(nr_prefix + title).ljust(32, ' ')[:32]
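TitleIndex.simplify builds the fixed-width sort key: titles not starting with a letter group under `#`, leading articles are dropped, and the result is padded to 32 characters. A hedged sketch (the stand-alone `simplify_title` helper is hypothetical and approximates `simplifyString` with lower-casing):

```python
from string import ascii_letters


def simplify_title(title, key_len=32):
    # '#' prefix for titles that don't start with a letter, strip a
    # leading article, pad to the index's fixed key width.
    simplified = title.lower().strip()
    prefix = '' if title and title[0] in ascii_letters else '#'
    for article in ('the ', 'an ', 'a '):
        if simplified.startswith(article):
            simplified = simplified[len(article):]
            break
    return (prefix + simplified).ljust(key_len, ' ')[:key_len]


key = simplify_title('The Matrix')
```

This is why "The Matrix" sorts under M and "300" under the shared `#` bucket.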
class StartsWithIndex(TreeBasedIndex):
_version = 3
custom_header = """from CodernityDB.tree_index import TreeBasedIndex
from string import ascii_letters
from couchpotato.core.helpers.encoding import toUnicode, simplifyString"""
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '1s'
super(StartsWithIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return self.first(key)
def make_key_value(self, data):
if data.get('_t') == 'media' and data.get('title') is not None:
return self.first(data['title']), None
def first(self, title):
title = toUnicode(title)
title = simplifyString(title)
for prefix in ['the ', 'an ', 'a ']:
if prefix == title[:len(prefix)]:
title = title[len(prefix):]
break
return str(title[0] if title and len(title) > 0 and title[0] in ascii_letters else '#').lower()
class MediaChildrenIndex(TreeBasedIndex):
_version = 1
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(MediaChildrenIndex, self).__init__(*args, **kwargs)
def make_key(self, key):
return key
def make_key_value(self, data):
if data.get('_t') == 'media' and data.get('parent_id'):
return data.get('parent_id'), None
class MediaTagIndex(MultiTreeBasedIndex):
_version = 2
custom_header = """from CodernityDB.tree_index import MultiTreeBasedIndex"""
def __init__(self, *args, **kwargs):
kwargs['key_format'] = '32s'
super(MediaTagIndex, self).__init__(*args, **kwargs)
def make_key_value(self, data):
if data.get('_t') == 'media' and data.get('tags') and len(data.get('tags', [])) > 0:
tags = set()
for tag in data.get('tags', []):
tags.add(self.make_key(tag))
return list(tags), None
def make_key(self, key):
return md5(key).hexdigest()


@@ -0,0 +1,533 @@
from datetime import timedelta
from operator import itemgetter
import time
import traceback
from string import ascii_lowercase
from CodernityDB.database import RecordNotFound
from couchpotato import tryInt, get_db
from couchpotato.api import addApiView
from couchpotato.core.event import fireEvent, fireEventAsync, addEvent
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import splitString, getImdb, getTitle
from couchpotato.core.logger import CPLog
from couchpotato.core.media import MediaBase
from .index import MediaIndex, MediaStatusIndex, MediaTypeIndex, TitleSearchIndex, TitleIndex, StartsWithIndex, MediaChildrenIndex, MediaTagIndex
log = CPLog(__name__)
class MediaPlugin(MediaBase):
_database = {
'media': MediaIndex,
'media_search_title': TitleSearchIndex,
'media_status': MediaStatusIndex,
'media_tag': MediaTagIndex,
'media_by_type': MediaTypeIndex,
'media_title': TitleIndex,
'media_startswith': StartsWithIndex,
'media_children': MediaChildrenIndex,
}
def __init__(self):
addApiView('media.refresh', self.refresh, docs = {
'desc': 'Refresh any media type by ID',
'params': {
'id': {'desc': 'Movie, Show, Season or Episode ID(s) you want to refresh.', 'type': 'int (comma separated)'},
}
})
addApiView('media.list', self.listView, docs = {
'desc': 'List media',
'params': {
'type': {'type': 'string', 'desc': 'Media type to filter on.'},
'status': {'type': 'array or csv', 'desc': 'Filter movie by status. Example:"active,done"'},
'release_status': {'type': 'array or csv', 'desc': 'Filter movie by status of its releases. Example:"snatched,available"'},
'limit_offset': {'desc': 'Limit and offset the movie list. Examples: "50" or "50,30"'},
'starts_with': {'desc': 'Starts with these characters. Example: "a" returns all movies starting with the letter "a"'},
'search': {'desc': 'Search movie title'},
},
'return': {'type': 'object', 'example': """{
'success': True,
'empty': bool, any movies returned or not,
'media': array, media found,
}"""}
})
addApiView('media.get', self.getView, docs = {
'desc': 'Get media by id',
'params': {
'id': {'desc': 'The id of the media'},
}
})
addApiView('media.delete', self.deleteView, docs = {
'desc': 'Delete a media from the wanted list',
'params': {
'id': {'desc': 'Media ID(s) you want to delete.', 'type': 'int (comma separated)'},
'delete_from': {'desc': 'Delete media from this page', 'type': 'string: all (default), wanted, manage'},
}
})
addApiView('media.available_chars', self.charView)
addEvent('app.load', self.addSingleRefreshView, priority = 100)
addEvent('app.load', self.addSingleListView, priority = 100)
addEvent('app.load', self.addSingleCharView, priority = 100)
addEvent('app.load', self.addSingleDeleteView, priority = 100)
addEvent('media.get', self.get)
addEvent('media.with_status', self.withStatus)
addEvent('media.with_identifiers', self.withIdentifiers)
addEvent('media.list', self.list)
addEvent('media.delete', self.delete)
addEvent('media.restatus', self.restatus)
addEvent('media.tag', self.tag)
addEvent('media.untag', self.unTag)
def refresh(self, id = '', **kwargs):
handlers = []
ids = splitString(id)
for x in ids:
refresh_handler = self.createRefreshHandler(x)
if refresh_handler:
handlers.append(refresh_handler)
fireEvent('notify.frontend', type = 'media.busy', data = {'_id': ids})
fireEventAsync('schedule.queue', handlers = handlers)
return {
'success': True,
}
def createRefreshHandler(self, media_id):
try:
media = get_db().get('id', media_id)
event = '%s.update_info' % media.get('type')
def handler():
fireEvent(event, media_id = media_id, on_complete = self.createOnComplete(media_id))
return handler
except:
log.error('Refresh handler for non-existing media: %s', traceback.format_exc())
def addSingleRefreshView(self):
for media_type in fireEvent('media.types', merge = True):
addApiView('%s.refresh' % media_type, self.refresh)
def get(self, media_id):
try:
db = get_db()
imdb_id = getImdb(str(media_id))
if imdb_id:
media = db.get('media', 'imdb-%s' % imdb_id, with_doc = True)['doc']
else:
media = db.get('id', media_id)
if media:
# Attach category
try: media['category'] = db.get('id', media.get('category_id'))
except: pass
media['releases'] = fireEvent('release.for_media', media['_id'], single = True)
return media
except RecordNotFound:
log.error('Media with id "%s" not found', media_id)
except:
raise
def getView(self, id = None, **kwargs):
media = self.get(id) if id else None
return {
'success': media is not None,
'media': media,
}
def withStatus(self, status, with_doc = True):
db = get_db()
status = list(status if isinstance(status, (list, tuple)) else [status])
for s in status:
for ms in db.get_many('media_status', s):
if with_doc:
try:
doc = db.get('id', ms['_id'])
yield doc
except RecordNotFound:
log.debug('Record not found, skipping: %s', ms['_id'])
else:
yield ms
def withIdentifiers(self, identifiers, with_doc = False):
db = get_db()
for x in identifiers:
try:
media = db.get('media', '%s-%s' % (x, identifiers[x]), with_doc = with_doc)
return media
except:
pass
log.debug('No media found with identifiers: %s', identifiers)
def list(self, types = None, status = None, release_status = None, status_or = False, limit_offset = None, with_tags = None, starts_with = None, search = None):
db = get_db()
# Make a list from string
if status and not isinstance(status, (list, tuple)):
status = [status]
if release_status and not isinstance(release_status, (list, tuple)):
release_status = [release_status]
if types and not isinstance(types, (list, tuple)):
types = [types]
if with_tags and not isinstance(with_tags, (list, tuple)):
with_tags = [with_tags]
# query media ids
if types:
all_media_ids = set()
for media_type in types:
all_media_ids = all_media_ids.union(set([x['_id'] for x in db.get_many('media_by_type', media_type)]))
else:
all_media_ids = set([x['_id'] for x in db.all('media')])
media_ids = list(all_media_ids)
filter_by = {}
# Filter on movie status
if status and len(status) > 0:
filter_by['media_status'] = set()
for media_status in fireEvent('media.with_status', status, with_doc = False, single = True):
filter_by['media_status'].add(media_status.get('_id'))
# Filter on release status
if release_status and len(release_status) > 0:
filter_by['release_status'] = set()
for release_status in fireEvent('release.with_status', release_status, with_doc = False, single = True):
filter_by['release_status'].add(release_status.get('media_id'))
# Add search filters
if starts_with:
starts_with = toUnicode(starts_with.lower())[0]
starts_with = starts_with if starts_with in ascii_lowercase else '#'
filter_by['starts_with'] = [x['_id'] for x in db.get_many('media_startswith', starts_with)]
# Add tag filter
if with_tags:
filter_by['with_tags'] = set()
for tag in with_tags:
for x in db.get_many('media_tag', tag):
filter_by['with_tags'].add(x['_id'])
# Filter with search query
if search:
filter_by['search'] = [x['_id'] for x in db.get_many('media_search_title', search)]
if status_or and 'media_status' in filter_by and 'release_status' in filter_by:
filter_by['status'] = list(filter_by['media_status']) + list(filter_by['release_status'])
del filter_by['media_status']
del filter_by['release_status']
# Filter by combining ids
for x in filter_by:
media_ids = [n for n in media_ids if n in filter_by[x]]
total_count = len(media_ids)
if total_count == 0:
return 0, []
offset = 0
limit = -1
if limit_offset:
splt = splitString(limit_offset) if isinstance(limit_offset, (str, unicode)) else limit_offset
limit = tryInt(splt[0])
offset = tryInt(0 if len(splt) == 1 else splt[1])
# List movies based on title order
medias = []
for m in db.all('media_title'):
media_id = m['_id']
if media_id not in media_ids: continue
if offset > 0:
offset -= 1
continue
media = fireEvent('media.get', media_id, single = True)
# Merge releases with movie dict
medias.append(media)
# remove from media ids
media_ids.remove(media_id)
if len(media_ids) == 0 or len(medias) == limit: break
return total_count, medias
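The list method above narrows the candidate ids by intersecting them with each active filter set (status, release status, tags, search, starts-with). A minimal sketch of that combining step (the stand-alone `combine_filters` helper and the example ids are hypothetical):

```python
def combine_filters(all_ids, filter_by):
    # Each filter contributes a set of allowed ids; only ids present
    # in every set survive, mirroring the loop over filter_by in
    # MediaPlugin.list.
    media_ids = list(all_ids)
    for name in filter_by:
        media_ids = [n for n in media_ids if n in filter_by[name]]
    return media_ids


ids = combine_filters(
    ['m1', 'm2', 'm3'],
    {'media_status': {'m1', 'm2'}, 'with_tags': {'m2', 'm3'}},
)
```

Intersecting id sets first keeps the expensive per-media `media.get` calls limited to rows that will actually be returned.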
def listView(self, **kwargs):
total_movies, movies = self.list(
types = splitString(kwargs.get('type')),
status = splitString(kwargs.get('status')),
release_status = splitString(kwargs.get('release_status')),
status_or = kwargs.get('status_or') is not None,
limit_offset = kwargs.get('limit_offset'),
with_tags = splitString(kwargs.get('with_tags')),
starts_with = kwargs.get('starts_with'),
search = kwargs.get('search')
)
return {
'success': True,
'empty': len(movies) == 0,
'total': total_movies,
'movies': movies,
}
def addSingleListView(self):
for media_type in fireEvent('media.types', merge = True):
def tempList(*args, **kwargs):
return self.listView(types = media_type, **kwargs)
addApiView('%s.list' % media_type, tempList)
def availableChars(self, types = None, status = None, release_status = None):
db = get_db()
# Make a list from string
if status and not isinstance(status, (list, tuple)):
status = [status]
if release_status and not isinstance(release_status, (list, tuple)):
release_status = [release_status]
if types and not isinstance(types, (list, tuple)):
types = [types]
# query media ids
if types:
all_media_ids = set()
for media_type in types:
all_media_ids = all_media_ids.union(set([x['_id'] for x in db.get_many('media_by_type', media_type)]))
else:
all_media_ids = set([x['_id'] for x in db.all('media')])
media_ids = all_media_ids
filter_by = {}
# Filter on movie status
if status and len(status) > 0:
filter_by['media_status'] = set()
for media_status in fireEvent('media.with_status', status, with_doc = False, single = True):
filter_by['media_status'].add(media_status.get('_id'))
# Filter on release status
if release_status and len(release_status) > 0:
filter_by['release_status'] = set()
for release_status in fireEvent('release.with_status', release_status, with_doc = False, single = True):
filter_by['release_status'].add(release_status.get('media_id'))
# Filter by combining ids
for x in filter_by:
media_ids = [n for n in media_ids if n in filter_by[x]]
chars = set()
for x in db.all('media_startswith'):
if x['_id'] in media_ids:
chars.add(x['key'])
if len(chars) == 25:
break
return list(chars)
def charView(self, **kwargs):
type = splitString(kwargs.get('type', 'movie'))
status = splitString(kwargs.get('status', None))
release_status = splitString(kwargs.get('release_status', None))
chars = self.availableChars(type, status, release_status)
return {
'success': True,
'empty': len(chars) == 0,
'chars': chars,
}
def addSingleCharView(self):
for media_type in fireEvent('media.types', merge = True):
def tempChar(*args, **kwargs):
return self.charView(types = media_type, **kwargs)
addApiView('%s.available_chars' % media_type, tempChar)
def delete(self, media_id, delete_from = None):
try:
db = get_db()
media = db.get('id', media_id)
if media:
deleted = False
media_releases = fireEvent('release.for_media', media['_id'], single = True)
if delete_from == 'all':
# Delete connected releases
for release in media_releases:
db.delete(release)
db.delete(media)
deleted = True
else:
total_releases = len(media_releases)
total_deleted = 0
new_media_status = None
for release in media_releases:
if delete_from in ['wanted', 'snatched', 'late']:
if release.get('status') != 'done':
db.delete(release)
total_deleted += 1
new_media_status = 'done'
elif delete_from == 'manage':
if release.get('status') == 'done' or media.get('status') == 'done':
db.delete(release)
total_deleted += 1
if (total_releases == total_deleted and media['status'] != 'active') or (total_releases == 0 and not new_media_status) or (not new_media_status and delete_from == 'late'):
db.delete(media)
deleted = True
elif new_media_status:
media['status'] = new_media_status
db.update(media)
fireEvent('media.untag', media['_id'], 'recent', single = True)
else:
fireEvent('media.restatus', media.get('_id'), single = True)
if deleted:
fireEvent('notify.frontend', type = 'media.deleted', data = media)
except:
log.error('Failed deleting media: %s', traceback.format_exc())
return True
def deleteView(self, id = '', **kwargs):
ids = splitString(id)
for media_id in ids:
self.delete(media_id, delete_from = kwargs.get('delete_from', 'all'))
return {
'success': True,
}
def addSingleDeleteView(self):
for media_type in fireEvent('media.types', merge = True):
def tempDelete(*args, **kwargs):
return self.deleteView(types = media_type, *args, **kwargs)
addApiView('%s.delete' % media_type, tempDelete)
def restatus(self, media_id):
try:
db = get_db()
m = db.get('id', media_id)
previous_status = m['status']
log.debug('Changing status for %s', getTitle(m))
if not m['profile_id']:
m['status'] = 'done'
else:
m['status'] = 'active'
try:
profile = db.get('id', m['profile_id'])
media_releases = fireEvent('release.for_media', m['_id'], single = True)
done_releases = [release for release in media_releases if release.get('status') == 'done']
if done_releases:
# Only look at latest added release
release = sorted(done_releases, key = itemgetter('last_edit'), reverse = True)[0]
# Check if we are finished with the media
if fireEvent('quality.isfinish', {'identifier': release['quality'], 'is_3d': release.get('is_3d', False)}, profile, timedelta(seconds = time.time() - release['last_edit']).days, single = True):
m['status'] = 'done'
elif previous_status == 'done':
m['status'] = 'done'
except RecordNotFound:
log.debug('Failed restatus, keeping previous: %s', traceback.format_exc())
m['status'] = previous_status
# Only update when status has changed
if previous_status != m['status']:
db.update(m)
# Tag media as recent
self.tag(media_id, 'recent')
return m['status']
except:
log.error('Failed restatus: %s', traceback.format_exc())
def tag(self, media_id, tag):
try:
db = get_db()
m = db.get('id', media_id)
tags = m.get('tags') or []
if tag not in tags:
tags.append(tag)
m['tags'] = tags
db.update(m)
return True
except:
log.error('Failed tagging: %s', traceback.format_exc())
return False
def unTag(self, media_id, tag):
try:
db = get_db()
m = db.get('id', media_id)
tags = m.get('tags') or []
if tag in tags:
new_tags = list(set(tags))
new_tags.remove(tag)
m['tags'] = new_tags
db.update(m)
return True
except:
log.error('Failed untagging: %s', traceback.format_exc())
return False


@@ -0,0 +1,8 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import Provider
log = CPLog(__name__)
class AutomationBase(Provider):
pass


@@ -0,0 +1,345 @@
from urlparse import urlparse
import json
import re
import time
import traceback
import xml.etree.ElementTree as XMLTree
from couchpotato.core.event import addEvent, fireEvent
from couchpotato.core.helpers.encoding import ss
from couchpotato.core.helpers.variable import tryFloat, mergeDicts, md5, \
possibleTitles
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
from couchpotato.environment import Env
log = CPLog(__name__)
class MultiProvider(Plugin):
def __init__(self):
self._classes = []
for Type in self.getTypes():
klass = Type()
# Overwrite name so logger knows what we're talking about
klass.setName('%s:%s' % (self.getName(), klass.getName()))
self._classes.append(klass)
def getTypes(self):
return []
def getClasses(self):
return self._classes
class Provider(Plugin):
type = None # movie, show, subtitle, trailer, ...
http_time_between_calls = 10 # Default timeout for url requests
last_available_check = {}
is_available = {}
def isAvailable(self, test_url):
if Env.get('dev'): return True
now = time.time()
host = urlparse(test_url).hostname
if self.last_available_check.get(host, 0) < now - 900:
self.last_available_check[host] = now
try:
self.urlopen(test_url, 30)
self.is_available[host] = True
except:
log.error('"%s" unavailable, trying again in 15 minutes.', host)
self.is_available[host] = False
return self.is_available.get(host, False)
def getJsonData(self, url, decode_from = None, **kwargs):
cache_key = md5(url)
data = self.getCache(cache_key, url, **kwargs)
if data:
try:
data = data.strip()
if decode_from:
data = data.decode(decode_from)
return json.loads(data)
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
return []
def getRSSData(self, url, item_path = 'channel/item', **kwargs):
cache_key = md5(url)
data = self.getCache(cache_key, url, **kwargs)
if data and len(data) > 0:
try:
data = XMLTree.fromstring(data)
return self.getElements(data, item_path)
except:
try:
data = XMLTree.fromstring(ss(data))
return self.getElements(data, item_path)
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
return []
def getHTMLData(self, url, **kwargs):
cache_key = md5(url)
return self.getCache(cache_key, url, **kwargs)
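The `isAvailable` check above caches a per-host result and re-probes a host at most once every 15 minutes. A standalone sketch of that scheme, where `probe` is a hypothetical stand-in for the provider's `urlopen` call:

```python
import time

RECHECK_INTERVAL = 900  # re-probe a host at most every 15 minutes

last_available_check = {}
is_available = {}

def check_available(host, probe, now=None):
    """probe(host) may raise on failure; the boolean result is cached."""
    now = time.time() if now is None else now
    if last_available_check.get(host, 0) < now - RECHECK_INTERVAL:
        last_available_check[host] = now
        try:
            is_available[host] = bool(probe(host))
        except Exception:
            is_available[host] = False
    return is_available.get(host, False)
```

Between probes the last known result is returned, so a flapping host cannot trigger a request storm.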
class YarrProvider(Provider):
protocol = None # nzb, torrent, torrent_magnet
cat_ids = {}
cat_backup_id = None
size_gb = ['gb', 'gib']
size_mb = ['mb', 'mib']
size_kb = ['kb', 'kib']
last_login_check = None
def __init__(self):
addEvent('provider.enabled_protocols', self.getEnabledProtocol)
addEvent('provider.belongs_to', self.belongsTo)
addEvent('provider.search.%s.%s' % (self.protocol, self.type), self.search)
def getEnabledProtocol(self):
if self.isEnabled():
return self.protocol
else:
return []
def buildUrl(self, *args, **kwargs):
pass
def login(self):
# Check if we are still logged in every hour
now = time.time()
if self.last_login_check and self.last_login_check < (now - 3600):
try:
output = self.urlopen(self.urls['login_check'])
if self.loginCheckSuccess(output):
self.last_login_check = now
return True
except: pass
self.last_login_check = None
if self.last_login_check:
return True
try:
output = self.urlopen(self.urls['login'], data = self.getLoginParams())
if self.loginSuccess(output):
self.last_login_check = now
return True
error = 'unknown'
except:
error = traceback.format_exc()
self.last_login_check = None
log.error('Failed to login to %s: %s', (self.getName(), error))
return False
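`login()` above keeps a session timestamp and re-verifies the session at most once per hour before falling back to a full login. A minimal sketch of that flow, with `do_login` and `do_check` as hypothetical stand-ins for the provider's HTTP calls:

```python
import time

LOGIN_RECHECK = 3600  # re-verify a cached session at most once per hour

class SessionCache:
    """Sketch of YarrProvider.login()'s hourly session caching."""

    def __init__(self, do_login, do_check):
        self.do_login = do_login    # full login request (hypothetical)
        self.do_check = do_check    # cheap logged-in check (hypothetical)
        self.last_login_check = None

    def login(self, now=None):
        now = time.time() if now is None else now
        # A session older than an hour is re-verified cheaply first
        if self.last_login_check and self.last_login_check < now - LOGIN_RECHECK:
            if self.do_check():
                self.last_login_check = now
                return True
            self.last_login_check = None
        if self.last_login_check:
            return True
        if self.do_login():
            self.last_login_check = now
            return True
        return False
```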
def loginSuccess(self, output):
return True
def loginCheckSuccess(self, output):
return True
def loginDownload(self, url = '', nzb_id = ''):
try:
if not self.login():
log.error('Failed downloading from %s', self.getName())
return self.urlopen(url)
except:
log.error('Failed downloading from %s: %s', (self.getName(), traceback.format_exc()))
def getLoginParams(self):
return {}
def download(self, url = '', nzb_id = ''):
try:
return self.urlopen(url, headers = {'User-Agent': Env.getIdentifier()}, show_error = False)
except:
log.error('Failed getting release from %s: %s', (self.getName(), traceback.format_exc()))
return 'try_next'
def search(self, media, quality):
if self.isDisabled():
return []
# Login if needed
if self.urls.get('login') and not self.login():
log.error('Failed to login to: %s', self.getName())
return []
# Create result container
imdb_results = hasattr(self, '_search')
results = ResultList(self, media, quality, imdb_results = imdb_results)
# Do search based on imdb id
if imdb_results:
self._search(media, quality, results)
# Search possible titles
else:
media_title = fireEvent('library.query', media, include_year = False, single = True)
for title in possibleTitles(media_title):
self._searchOnTitle(title, media, quality, results)
return results
def belongsTo(self, url, provider = None, host = None):
try:
if provider and provider == self.getName():
return self
hostname = urlparse(url).hostname
if host and hostname in host:
return self
else:
for url_type in self.urls:
download_url = self.urls[url_type]
if hostname in download_url:
return self
except:
log.debug('Url %s doesn\'t belong to %s', (url, self.getName()))
return
def parseSize(self, size):
size_raw = size.lower()
size = tryFloat(re.sub(r'[^0-9.]', '', size).strip())
for s in self.size_gb:
if s in size_raw:
return size * 1024
for s in self.size_mb:
if s in size_raw:
return size
for s in self.size_kb:
if s in size_raw:
return size / 1024
return 0
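`parseSize` normalises a human-readable size string to megabytes using the unit lists declared on the class. A self-contained sketch of the same table, without CouchPotato's `tryFloat` helper:

```python
import re

SIZE_GB = ('gb', 'gib')
SIZE_MB = ('mb', 'mib')
SIZE_KB = ('kb', 'kib')

def parse_size(size):
    """Return the size in megabytes, or 0 when no unit is recognised."""
    raw = size.lower()
    value = float(re.sub(r'[^0-9.]', '', size).strip() or 0)
    if any(unit in raw for unit in SIZE_GB):
        return value * 1024
    if any(unit in raw for unit in SIZE_MB):
        return value
    if any(unit in raw for unit in SIZE_KB):
        return value / 1024
    return 0
```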
def getCatId(self, quality = None):
if not quality: quality = {}
identifier = quality.get('identifier')
want_3d = False
if quality.get('custom'):
want_3d = quality['custom'].get('3d')
for ids, qualities in self.cat_ids:
if identifier in qualities or (want_3d and '3d' in qualities):
return ids
if self.cat_backup_id:
return [self.cat_backup_id]
return []
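`getCatId` walks the provider's `(ids, qualities)` pairs and returns the first category id list whose quality list matches, falling back to `cat_backup_id`. A sketch using the omgwtfnzbs provider's `cat_ids` table from this changeset as example data:

```python
CAT_IDS = [  # (category ids, matching quality identifiers)
    ([15], ['dvdrip']),
    ([15, 16], ['brrip']),
    ([16], ['720p', '1080p', 'bd50']),
    ([17], ['dvdr']),
]
CAT_BACKUP_ID = 'movie'

def get_cat_id(identifier, want_3d=False):
    for ids, qualities in CAT_IDS:
        if identifier in qualities or (want_3d and '3d' in qualities):
            return ids
    return [CAT_BACKUP_ID] if CAT_BACKUP_ID else []
```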
class ResultList(list):
result_ids = None
provider = None
media = None
quality = None
def __init__(self, provider, media, quality, **kwargs):
self.result_ids = []
self.provider = provider
self.media = media
self.quality = quality
self.kwargs = kwargs
super(ResultList, self).__init__()
def extend(self, results):
for r in results:
self.append(r)
def append(self, result):
new_result = self.fillResult(result)
is_correct = fireEvent('searcher.correct_release', new_result, self.media, self.quality,
imdb_results = self.kwargs.get('imdb_results', False), single = True)
if is_correct and new_result['id'] not in self.result_ids:
is_correct_weight = float(is_correct)
new_result['score'] += fireEvent('score.calculate', new_result, self.media, single = True)
old_score = new_result['score']
new_result['score'] = int(old_score * is_correct_weight)
log.info2('Found correct release with weight %.02f, old_score(%d) now scaled to score(%d)', (
is_correct_weight,
old_score,
new_result['score']
))
self.found(new_result)
self.result_ids.append(result['id'])
super(ResultList, self).append(new_result)
def fillResult(self, result):
defaults = {
'id': 0,
'protocol': self.provider.protocol,
'type': self.provider.type,
'provider': self.provider.getName(),
'download': self.provider.loginDownload if self.provider.urls.get('login') else self.provider.download,
'seed_ratio': Env.setting('seed_ratio', section = self.provider.getName().lower(), default = ''),
'seed_time': Env.setting('seed_time', section = self.provider.getName().lower(), default = ''),
'url': '',
'name': '',
'age': 0,
'size': 0,
'description': '',
'score': 0
}
return mergeDicts(defaults, result)
def found(self, new_result):
if not new_result.get('provider_extra'):
new_result['provider_extra'] = ''
else:
new_result['provider_extra'] = ', %s' % new_result['provider_extra']
log.info('Found: score(%(score)s) on %(provider)s%(provider_extra)s: %(name)s', new_result)
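`ResultList.append` fills in defaults, weighs the score by the searcher's correctness result, and de-duplicates on release id. A condensed sketch, with `is_correct` a hypothetical stand-in for the `searcher.correct_release` event:

```python
DEFAULTS = {'id': 0, 'name': '', 'score': 0, 'size': 0}

class Results(list):
    """Condensed sketch of ResultList: default-fill, weigh, de-duplicate."""

    def __init__(self, is_correct):
        super(Results, self).__init__()
        self.is_correct = is_correct  # returns a weight; falsy rejects
        self.result_ids = []

    def append(self, result):
        merged = dict(DEFAULTS, **result)
        weight = self.is_correct(merged)
        if weight and merged['id'] not in self.result_ids:
            merged['score'] = int(merged['score'] * float(weight))
            self.result_ids.append(merged['id'])
            super(Results, self).append(merged)
```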


@@ -0,0 +1,5 @@
from couchpotato.core.media._base.providers.base import Provider
class BaseInfoProvider(Provider):
type = 'unknown'


@@ -0,0 +1,8 @@
from couchpotato.core.logger import CPLog
from couchpotato.core.plugins.base import Plugin
log = CPLog(__name__)
class MetaDataBase(Plugin):
pass


@@ -1,15 +1,14 @@
config = {
config = [{
'name': 'nzb_providers',
'groups': [
{
'label': 'Usenet',
'label': 'Usenet Providers',
'description': 'Providers searching usenet for new releases',
'wizard': True,
'type': 'list',
'name': 'nzb_providers',
'tab': 'searcher',
'subtab': 'providers',
'options': [],
},
],
}
}]


@@ -1,9 +1,11 @@
from couchpotato.core.providers.base import YarrProvider
import time
from couchpotato.core.media._base.providers.base import YarrProvider
class NZBProvider(YarrProvider):
type = 'nzb'
protocol = 'nzb'
def calculateAge(self, unix):
return int(time.time() - unix) / 24 / 60 / 60
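`calculateAge` converts a unix posting timestamp to an age in whole days. The same computation, with the Python 2 integer division written as explicit floor division:

```python
import time

def calculate_age(unix, now=None):
    """Age of a post in whole days (floor division, as in Python 2's /)."""
    now = time.time() if now is None else now
    return int(now - unix) // (24 * 60 * 60)
```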


@@ -0,0 +1,120 @@
import re
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt, simplifyString
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.nzb.base import NZBProvider
log = CPLog(__name__)
class Base(NZBProvider):
urls = {
'download': 'https://www.binsearch.info/fcgi/nzb.fcgi?q=%s',
'detail': 'https://www.binsearch.info%s',
'search': 'https://www.binsearch.info/index.php?%s',
}
http_time_between_calls = 4 # Seconds
def _search(self, media, quality, results):
data = self.getHTMLData(self.urls['search'] % self.buildUrl(media, quality))
if data:
try:
html = BeautifulSoup(data)
main_table = html.find('table', attrs = {'id': 'r2'})
if not main_table:
return
items = main_table.find_all('tr')
for row in items:
title = row.find('span', attrs = {'class': 's'})
if not title: continue
nzb_id = row.find('input', attrs = {'type': 'checkbox'})['name']
info = row.find('span', attrs = {'class':'d'})
size_match = re.search('size:.(?P<size>[0-9\.]+.[GMB]+)', info.text)
age = 0
try: age = re.search('(?P<age>\d+d)', row.find_all('td')[-1:][0].text).group('age')[:-1]
except: pass
def extra_check(item):
parts = re.search('available:.(?P<parts>\d+)./.(?P<total>\d+)', info.text)
total = float(tryInt(parts.group('total')))
parts = float(tryInt(parts.group('parts')))
if (total / parts) < 1 and ((total / parts) < 0.95 or ((total / parts) >= 0.95 and not ('par2' in info.text.lower() or 'pa3' in info.text.lower()))):
log.info2('Wrong: \'%s\', not complete: %s out of %s', (item['name'], parts, total))
return False
if 'requires password' in info.text.lower():
log.info2('Wrong: \'%s\', passworded', (item['name']))
return False
return True
results.append({
'id': nzb_id,
'name': simplifyString(title.text),
'age': tryInt(age),
'size': self.parseSize(size_match.group('size')),
'url': self.urls['download'] % nzb_id,
'detail_url': self.urls['detail'] % info.find('a')['href'],
'extra_check': extra_check
})
except:
log.error('Failed to parse HTML response from BinSearch: %s', traceback.format_exc())
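The `extra_check` closure above rejects incomplete or passworded posts. A hypothetical restatement of its completeness rule: accept a release when all parts are present, or when at least 95% are present and PAR2 repair files are listed:

```python
def is_complete(parts_available, parts_total, has_par2):
    ratio = float(parts_available) / parts_total
    if ratio >= 1:
        return True
    # Slightly incomplete posts are acceptable only with repair files
    return ratio >= 0.95 and has_par2
```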
def download(self, url = '', nzb_id = ''):
data = {
'action': 'nzb',
nzb_id: 'on'
}
try:
return self.urlopen(url, data = data, show_error = False)
except:
log.error('Failed getting nzb from %s: %s', (self.getName(), traceback.format_exc()))
return 'try_next'
config = [{
'name': 'binsearch',
'groups': [
{
'tab': 'searcher',
'list': 'nzb_providers',
'name': 'binsearch',
'description': 'Free provider, less accurate. See <a href="https://www.binsearch.info/">BinSearch</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAAAAAA6mKC9AAAATklEQVQY02NwQAMMWAXOnz+PKvD//3/CAvM//z+fgiwAAs+RBab4PP//vwbFjPlAffgEChzOo2r5fBuIfRAC5w8D+QUofkkp8MHjOWQAAM3Sbogztg2wAAAAAElFTkSuQmCC',
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]


@@ -0,0 +1,266 @@
from urlparse import urlparse
import time
import traceback
import re
from couchpotato.core.helpers.encoding import tryUrlencode, toUnicode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import cleanHost, splitString, tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import ResultList
from couchpotato.core.media._base.providers.nzb.base import NZBProvider
from couchpotato.environment import Env
from dateutil.parser import parse
from requests import HTTPError
log = CPLog(__name__)
class Base(NZBProvider, RSS):
urls = {
'detail': 'details/%s',
'download': 't=get&id=%s'
}
passwords_regex = 'password|wachtwoord'
limits_reached = {}
http_time_between_calls = 1 # Seconds
def search(self, media, quality):
hosts = self.getHosts()
results = ResultList(self, media, quality, imdb_results = True)
for host in hosts:
if self.isDisabled(host):
continue
self._searchOnHost(host, media, quality, results)
return results
def _searchOnHost(self, host, media, quality, results):
query = self.buildUrl(media, host)
url = '%s&%s' % (self.getUrl(host['host']), query)
nzbs = self.getRSSData(url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})
for nzb in nzbs:
date = None
spotter = None
for item in nzb:
if date and spotter:
break
if item.attrib.get('name') == 'usenetdate':
date = item.attrib.get('value')
break
# Get the name of the person who posts the spot
if item.attrib.get('name') == 'poster':
if "@spot.net" in item.attrib.get('value'):
spotter = item.attrib.get('value').split("@")[0]
continue
if not date:
date = self.getTextElement(nzb, 'pubDate')
nzb_id = self.getTextElement(nzb, 'guid').split('/')[-1:].pop()
name = self.getTextElement(nzb, 'title')
if not name:
continue
name_extra = ''
if spotter:
name_extra = spotter
description = ''
if "@spot.net" in nzb_id:
try:
# Get details for extended description to retrieve passwords
query = self.buildDetailsUrl(nzb_id, host['api_key'])
url = '%s&%s' % (self.getUrl(host['host']), query)
nzb_details = self.getRSSData(url, cache_timeout = 1800, headers = {'User-Agent': Env.getIdentifier()})[0]
description = self.getTextElement(nzb_details, 'description')
# Extract a password from the description
password = re.search('(?:' + self.passwords_regex + ')(?: *)(?:\:|\=)(?: *)(.*?)\<br\>|\n|$', description, flags = re.I).group(1)
if password:
name += ' {{%s}}' % password.strip()
except:
log.debug('Error getting details of "%s": %s', (name, traceback.format_exc()))
results.append({
'id': nzb_id,
'provider_extra': urlparse(host['host']).hostname or host['host'],
'name': toUnicode(name),
'name_extra': name_extra,
'age': self.calculateAge(int(time.mktime(parse(date).timetuple()))),
'size': int(self.getElement(nzb, 'enclosure').attrib['length']) / 1024 / 1024,
'url': ((self.getUrl(host['host']) + self.urls['download']) % tryUrlencode(nzb_id)) + self.getApiExt(host),
'detail_url': (cleanHost(host['host']) + self.urls['detail']) % tryUrlencode(nzb_id),
'content': self.getTextElement(nzb, 'description'),
'description': description,
'score': host['extra_score'],
})
def getHosts(self):
uses = splitString(str(self.conf('use')), clean = False)
hosts = splitString(self.conf('host'), clean = False)
api_keys = splitString(self.conf('api_key'), clean = False)
extra_score = splitString(self.conf('extra_score'), clean = False)
custom_tags = splitString(self.conf('custom_tag'), clean = False)
host_list = []
for nr in range(len(hosts)):
try: key = api_keys[nr]
except: key = ''
try: host = hosts[nr]
except: host = ''
try: score = tryInt(extra_score[nr])
except: score = 0
try: custom_tag = custom_tags[nr]
except: custom_tag = ''
host_list.append({
'use': uses[nr],
'host': host,
'api_key': key,
'extra_score': score,
'custom_tag': custom_tag
})
return host_list
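`getHosts` above zips the parallel comma-separated settings (use, host, api_key, extra_score, custom_tag) into one dict per configured indexer, padding missing entries. A simplified sketch of that zipping; unlike `splitString(..., clean = False)` it trims whitespace around each entry:

```python
def parse_hosts(uses, hosts, api_keys, extra_scores):
    """Zip parallel comma-separated config strings into host dicts."""
    split = lambda value: [part.strip() for part in value.split(',')]
    host_names = split(hosts)
    uses, api_keys, scores = split(uses), split(api_keys), split(extra_scores)

    def at(items, index, default=''):
        # Parallel lists can be shorter than the host list
        return items[index] if index < len(items) else default

    return [{
        'use': at(uses, index, '0'),
        'host': host,
        'api_key': at(api_keys, index),
        'extra_score': int(at(scores, index) or 0),
    } for index, host in enumerate(host_names)]
```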
def belongsTo(self, url, provider = None, host = None):
hosts = self.getHosts()
for host in hosts:
result = super(Base, self).belongsTo(url, host = host['host'], provider = provider)
if result:
return result
def getUrl(self, host):
if '?page=newznabapi' in host:
return cleanHost(host)[:-1] + '&'
return cleanHost(host) + 'api?'
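`getUrl` appends the standard `api?` path unless the host already carries a Spotweb-style `?page=newznabapi` query. A sketch without the `cleanHost` helper (which also normalises the scheme):

```python
def api_url(host):
    """Build the newznab endpoint for a configured host string."""
    host = host.rstrip('/') + '/'
    if '?page=newznabapi' in host:
        return host[:-1] + '&'   # Spotweb already carries the query
    return host + 'api?'
```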
def isDisabled(self, host = None):
return not self.isEnabled(host)
def isEnabled(self, host = None):
# Return true if at least one is enabled and no host is given
if host is None:
for host in self.getHosts():
if self.isEnabled(host):
return True
return False
return NZBProvider.isEnabled(self) and host['host'] and host['api_key'] and int(host['use'])
def getApiExt(self, host):
return '&apikey=%s' % host['api_key']
def download(self, url = '', nzb_id = ''):
host = urlparse(url).hostname
if self.limits_reached.get(host):
# Try again in 3 hours
if self.limits_reached[host] > time.time() - 10800:
return 'try_next'
try:
data = self.urlopen(url, show_error = False)
self.limits_reached[host] = False
return data
except HTTPError as e:
if e.code == 503:
response = e.read().lower()
if 'maximum api' in response or 'download limit' in response:
if not self.limits_reached.get(host):
log.error('Limit reached for newznab provider: %s', host)
self.limits_reached[host] = time.time()
return 'try_next'
log.error('Failed download from %s: %s', (host, traceback.format_exc()))
return 'try_next'
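The download path above marks a host whose 503 response mentions an API or download limit and skips it for three hours before retrying. The backoff bookkeeping in isolation:

```python
import time

LIMIT_COOLDOWN = 10800  # skip a limited host for 3 hours

limits_reached = {}

def limit_active(host, now=None):
    now = time.time() if now is None else now
    hit_at = limits_reached.get(host)
    return bool(hit_at) and hit_at > now - LIMIT_COOLDOWN

def record_limit(host, now=None):
    limits_reached[host] = time.time() if now is None else now

def clear_limit(host):
    limits_reached[host] = False  # a successful download resets the flag
```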
def buildDetailsUrl(self, nzb_id, api_key):
query = tryUrlencode({
't': 'details',
'id': nzb_id,
'apikey': api_key,
})
return query
config = [{
'name': 'newznab',
'groups': [
{
'tab': 'searcher',
'list': 'nzb_providers',
'name': 'newznab',
'order': 10,
'description': 'Enable <a href="http://newznab.com/" target="_blank">NewzNab</a> such as <a href="https://nzb.su" target="_blank">NZB.su</a>, \
<a href="https://nzbs.org" target="_blank">NZBs.org</a>, <a href="http://dognzb.cr/" target="_blank">DOGnzb.cr</a>, \
<a href="https://github.com/spotweb/spotweb" target="_blank">Spotweb</a>, <a href="https://nzbgeek.info/" target="_blank">NZBGeek</a>, \
<a href="https://smackdownonyou.com" target="_blank">SmackDown</a>, <a href="https://www.nzbfinder.ws" target="_blank">NZBFinder</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQAgMAAABinRfyAAAACVBMVEVjhwD///86aRovd/sBAAAAMklEQVQI12NgAIPQUCCRmQkjssDEShiRuRIqwZqZGcDAGBrqANUhGgIkWAOABKMDxCAA24UK50b26SAAAAAASUVORK5CYII=',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
{
'name': 'use',
'default': '0,0,0,0,0,0'
},
{
'name': 'host',
'default': 'api.nzb.su,api.dognzb.cr,nzbs.org,https://index.nzbgeek.info, https://smackdownonyou.com, https://www.nzbfinder.ws',
'description': 'The hostname of your newznab provider',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'default': '0,0,0,0,0,0',
'description': 'Starting score for each release found via this provider.',
},
{
'name': 'custom_tag',
'advanced': True,
'label': 'Custom tag',
'default': ',,,,,',
'description': 'Add custom tags, for example add rls=1 to get only scene releases from nzbs.org',
},
{
'name': 'api_key',
'default': ',,,,,',
'label': 'Api Key',
'description': 'Can be found on your profile page',
'type': 'combined',
'combine': ['use', 'host', 'api_key', 'extra_score', 'custom_tag'],
},
],
},
],
}]


@@ -1,37 +1,28 @@
import time
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.providers.nzb.base import NZBProvider
from couchpotato.core.media._base.providers.nzb.base import NZBProvider
from dateutil.parser import parse
import time
log = CPLog(__name__)
class NZBClub(NZBProvider, RSS):
class Base(NZBProvider, RSS):
urls = {
'search': 'http://www.nzbclub.com/nzbfeed.aspx?%s',
'search': 'https://www.nzbclub.com/nzbfeeds.aspx?%s',
}
http_time_between_calls = 4 #seconds
http_time_between_calls = 4 # seconds
def _searchOnTitle(self, title, movie, quality, results):
def _search(self, media, quality, results):
q = '"%s %s"' % (title, movie['library']['year'])
params = tryUrlencode({
'q': q,
'ig': 1,
'rpp': 200,
'st': 5,
'sp': 1,
'ns': 1,
})
nzbs = self.getRSSData(self.urls['search'] % params)
nzbs = self.getRSSData(self.urls['search'] % self.buildUrl(media))
for nzb in nzbs:
@@ -64,7 +55,7 @@ class NZBClub(NZBProvider, RSS):
def getMoreInfo(self, item):
full_description = self.getCache('nzbclub.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
html = BeautifulSoup(full_description)
nfo_pre = html.find('pre', attrs = {'class':'nfo'})
nfo_pre = html.find('pre', attrs = {'class': 'nfo'})
description = toUnicode(nfo_pre.text) if nfo_pre else ''
item['description'] = description
@@ -78,3 +69,32 @@ class NZBClub(NZBProvider, RSS):
return False
return True
config = [{
'name': 'nzbclub',
'groups': [
{
'tab': 'searcher',
'list': 'nzb_providers',
'name': 'NZBClub',
'description': 'Free provider, less accurate. See <a href="https://www.nzbclub.com/">NZBClub</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACEUlEQVQ4y3VSMWgUQRR9/8/s7OzeJSdnTsVGghLEYBNQjBpQiRBFhIB2EcHG1kbs0murhZAmVocExEZQ0c7CxkLINYcJJpoYj9wZcnu72fF21uJSXMzuhyne58/j/fcf4b+KokgBIOSU53lxP5b9oNVqDT36dH+5UjoiKvIwPFEEgWBshGZ3E7/NOupL9fMjx0e+ZhKsrq+c/FPZKJi0w4FsQXMBDEJsd7BNW9h2tuyP9vfTALIJkMIu1hYRtINM+dpzcWc0sbkreK4fUEogyraAmKGF3+7vcT/wtR9QwkCabSAzQQuvk0uglAo5YaQ5DASGYjfMXcHVOqKu6NmR7iehlKAdHWUqWPv1c3i+9uwVdRlEBGaGEAJCCrDo9ShhvF6qPq8tL57bp+DbRn2sHtUuCY9YphLMu5921VhrwYJ5tbt0tt6sjQP4vEfB2Ikz7/ytwbeR6ljHkXCUA6UcOLtPOg4MYhtH8ZcLw5er+xQMDAwEURRNl96X596Y6oxFwsw9fmtTOAr2Ik19nL365FZpsLSdnQPPM8aYewc+lDcX4rkHqbQMAGTJXulOLzycmr1bKBTi3DOGYagajcahiaOT89fbM0/dxEsUu3aidfPljWO3HzebzYNBELi5Z5RSJlrrHd/3w8lT114MrVTWOn875fHRiYVisRhorWMpZXdvNnLKGCOstb0AMlulVJI19w/+nceU4D0aCwAAAABJRU5ErkJggg==',
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]


@@ -0,0 +1,126 @@
import re
import time
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.event import fireEvent
from couchpotato.core.media._base.providers.nzb.base import NZBProvider
from dateutil.parser import parse
log = CPLog(__name__)
class Base(NZBProvider, RSS):
urls = {
'download': 'https://www.nzbindex.com/download/',
'search': 'https://www.nzbindex.com/rss/?%s',
}
http_time_between_calls = 1 # Seconds
def _search(self, media, quality, results):
nzbs = self.getRSSData(self.urls['search'] % self.buildUrl(media, quality))
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
nzbindex_id = int(self.getTextElement(nzb, "link").split('/')[4])
title = self.getTextElement(nzb, "title")
match = fireEvent('matcher.parse', title, parser='usenet', single = True)
if not match.chains:
log.info('Unable to parse release with title "%s"', title)
continue
# TODO should we consider other lower-weight chains here?
info = fireEvent('matcher.flatten_info', match.chains[0].info, single = True)
release_name = fireEvent('matcher.construct_from_raw', info.get('release_name'), single = True)
file_name = info.get('detail', {}).get('file_name')
file_name = file_name[0] if file_name else None
title = release_name or file_name
# Strip extension from parsed title (if one exists)
ext_pos = title.rfind('.')
# Assume an extension if it is 4 characters or fewer
# TODO this should probably be done a better way
if ext_pos > 0 and len(title[ext_pos + 1:]) <= 4:
title = title[:ext_pos]
if not title:
log.info('Unable to find release name from match')
continue
try:
description = self.getTextElement(nzb, "description")
except:
description = ''
def extra_check(item):
if '#c20000' in item['description'].lower():
log.info('Wrong: Seems to be passworded: %s', item['name'])
return False
return True
results.append({
'id': nzbindex_id,
'name': title,
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, "pubDate")).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': enclosure['url'].replace('/download/', '/release/'),
'description': description,
'get_more_info': self.getMoreInfo,
'extra_check': extra_check,
})
def getMoreInfo(self, item):
try:
if '/nfo/' in item['description'].lower():
nfo_url = re.search('href=\"(?P<nfo>.+)\" ', item['description']).group('nfo')
full_description = self.getCache('nzbindex.%s' % item['id'], url = nfo_url, cache_timeout = 25920000)
html = BeautifulSoup(full_description)
item['description'] = toUnicode(html.find('pre', attrs = {'id': 'nfo0'}).text)
except:
pass
config = [{
'name': 'nzbindex',
'groups': [
{
'tab': 'searcher',
'list': 'nzb_providers',
'name': 'nzbindex',
'description': 'Free provider, less accurate. See <a href="https://www.nzbindex.com/">NZBIndex</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAo0lEQVR42t2SQQ2AMBAEcUCwUAv94QMLfHliAQtYqIVawEItYAG6yZFMLkUANNlk79Kbbtp2P1j9uKxVV9VWFeStl+Wh3fWK9hNwEoADZkJtMD49AqS5AUjWGx6A+m+ARICGrM5W+wSTB0gETKzdHZwCEZAJ8PGZQN4AiQAmkR9s06EBAugJiBoAAPFfAQcBgZcIHzwA6TYP4JsXeSg3P9L31w3eksbH3zMb/wAAAABJRU5ErkJggg==',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]


@@ -0,0 +1,103 @@
from urlparse import urlparse, parse_qs
import time
from couchpotato.core.event import fireEvent
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.rss import RSS
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.nzb.base import NZBProvider
from dateutil.parser import parse
log = CPLog(__name__)
class Base(NZBProvider, RSS):
urls = {
'search': 'https://rss.omgwtfnzbs.org/rss-search.php?%s',
'detail_url': 'https://omgwtfnzbs.org/details.php?id=%s',
}
http_time_between_calls = 1 # Seconds
cat_ids = [
([15], ['dvdrip']),
([15, 16], ['brrip']),
([16], ['720p', '1080p', 'bd50']),
([17], ['dvdr']),
]
cat_backup_id = 'movie'
def search(self, movie, quality):
if quality['identifier'] in fireEvent('quality.pre_releases', single = True):
return []
return super(Base, self).search(movie, quality)
def _searchOnTitle(self, title, movie, quality, results):
q = '%s %s' % (title, movie['info']['year'])
params = tryUrlencode({
'search': q,
'catid': ','.join([str(x) for x in self.getCatId(quality)]),
'user': self.conf('username', default = ''),
'api': self.conf('api_key', default = ''),
})
nzbs = self.getRSSData(self.urls['search'] % params)
for nzb in nzbs:
enclosure = self.getElement(nzb, 'enclosure').attrib
nzb_id = parse_qs(urlparse(self.getTextElement(nzb, 'link')).query).get('id')[0]
results.append({
'id': nzb_id,
'name': toUnicode(self.getTextElement(nzb, 'title')),
'age': self.calculateAge(int(time.mktime(parse(self.getTextElement(nzb, 'pubDate')).timetuple()))),
'size': tryInt(enclosure['length']) / 1024 / 1024,
'url': enclosure['url'],
'detail_url': self.urls['detail_url'] % nzb_id,
'description': self.getTextElement(nzb, 'description')
})
config = [{
'name': 'omgwtfnzbs',
'groups': [
{
'tab': 'searcher',
'list': 'nzb_providers',
'name': 'OMGWTFNZBs',
'description': 'See <a href="http://omgwtfnzbs.org/">OMGWTFNZBs</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQEAIAAADAAbR1AAADbElEQVR4AZ2UW0ybZRiAy/OvdHaLYvB0YTRIFi7GkM44zRLmIfNixkWdiRMyYoxRE8/TC7MYvXCGEBmr3mxLwVMwY0wYA7e6Wso4lB6h/U9taSlMGIfBXLYlJMyo0S///2dJI5lxN8/F2/f9nu9737e/jYmXr6KTbN9BGG9HE/NotQ76UWziNzrXFiETk/5ARUNH+7+0kW7fSgTl0VKGOLZzidOkmuuIo7q2oTArNLPIzhdIkqXkerFOm2CaD/5bcKrjIL2c3fkhPxOq93Kcb91v46fV9TQKF4TgV/TbUsQtzfCaK6jMOd5DJrguSIIhexmqqVxN0FXbRR8/ND/LYTTj6J7nl2gnL47OkDW4KJhnQHCa6JpKVNJGA3OC58nwBJoZ//ebbIyKpBxjrr0o1q1FMRkrKXZnHWF85VvxMrJxibwhGyd0f5bLnKzqJs1k0Sfo+EU8hdAUvkbcwKEgs2D0OiV4jmmD1zb+Tp6er0JMMvDxPo5xev9zTBF683NS+N56n1YiB95B5crr93KRuKhKI0tb0Kw2mgLLqTjLEWO8424i9IvURaYeOckwf3+/yCC9e3bQQ/MuD+Monk0k+XFXMUfx7z5EEP+XlXi5tLlMxH8zLppw7idJrugcus30kC86gc7UrQqjLIukM8zWHOACeU+TiMxXN6ExVOkgz4lvPEzice1GIVhxhG4CrZvpl6TH55giKWqXGLy9hZh5aUtgDSew/msSyCKpl+DDNfxJc8NBIsxUxUnz14O/oONu+IIIvso9TLBQ1SY5rUhuSzUhAqJ2mRXBLDOCeUtgUZXsaObT8BffhUJPqWgiV+3zKKzYH0ClvTRLhD77HIqVkyh5jThnivehoG+qJctIRSPn6bxvO4FCgTl9c1DmbpjLajbQFE8aW5SU3rg+zOPGUjTUF9NFpLEbH2c/KmGYlY69/GQJVtGMSUcEp9eCbB1nctbxHTLRdTUkGDf+B02uGWRG3OvpJ/zSMwzif+oxVBID3cQKBavLCiPmB2PM2UuSCUPgrX4VDb97AwEG67bh4+KTOlncvu3M31BwA5rLHbCfEjwkNDky9e/SSbSxnD46Pg0RJtpXRvhmBSZHpRjWtKwFybjuQeXaKxto4WjLZZZvVmC17pZLJFkwxm5++PS2Mrwc7nyIMYZe/IzoP5d6QgEybqTXAAAAAElFTkSuQmCC',
'options': [
{
'name': 'enabled',
'type': 'enabler',
},
{
'name': 'username',
'default': '',
},
{
'name': 'api_key',
'label': 'Api Key',
'default': '',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'default': 20,
'type': 'int',
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]


@@ -1,15 +1,14 @@
config = {
config = [{
'name': 'torrent_providers',
'groups': [
{
'label': 'Torrent',
'label': 'Torrent Providers',
'description': 'Providers searching torrent sites for new releases',
'wizard': True,
'type': 'list',
'name': 'torrent_providers',
'tab': 'searcher',
'subtab': 'providers',
'options': [],
},
],
}
}]


@@ -0,0 +1,141 @@
import re
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt, getIdentifier
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'test': 'https://awesome-hd.net/',
'detail': 'https://awesome-hd.net/torrents.php?torrentid=%s',
'search': 'https://awesome-hd.net/searchapi.php?action=imdbsearch&passkey=%s&imdb=%s&internal=%s',
'download': 'https://awesome-hd.net/torrents.php?action=download&id=%s&authkey=%s&torrent_pass=%s',
}
http_time_between_calls = 1
def _search(self, movie, quality, results):
data = self.getHTMLData(self.urls['search'] % (self.conf('passkey'), getIdentifier(movie), self.conf('only_internal')))
if data:
try:
soup = BeautifulSoup(data)
if soup.find('error'):
log.error(soup.find('error').get_text())
return
authkey = soup.find('authkey').get_text()
entries = soup.find_all('torrent')
for entry in entries:
torrentscore = 0
torrent_id = entry.find('id').get_text()
name = entry.find('name').get_text()
year = entry.find('year').get_text()
releasegroup = entry.find('releasegroup').get_text()
resolution = entry.find('resolution').get_text()
encoding = entry.find('encoding').get_text()
freeleech = entry.find('freeleech').get_text()
torrent_desc = '/ %s / %s / %s ' % (releasegroup, resolution, encoding)
if freeleech == '0.25' and self.conf('prefer_internal'):
torrent_desc += '/ Internal'
torrentscore += 200
if encoding == 'x264' and self.conf('favor') in ['encode', 'both']:
torrentscore += 300
if re.search('Remux', encoding) and self.conf('favor') in ['remux', 'both']:
torrentscore += 200
results.append({
'id': torrent_id,
'name': re.sub('[^A-Za-z0-9\-_ \(\).]+', '', '%s (%s) %s' % (name, year, torrent_desc)),
'url': self.urls['download'] % (torrent_id, authkey, self.conf('passkey')),
'detail_url': self.urls['detail'] % torrent_id,
'size': self.parseSize(entry.find('size').get_text()),
'seeders': tryInt(entry.find('seeders').get_text()),
'leechers': tryInt(entry.find('leechers').get_text()),
'score': torrentscore
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
config = [{
'name': 'awesomehd',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'Awesome-HD',
'description': '<a href="https://awesome-hd.net">AHD</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAC+UlEQVR4AV1SO0y6dxQ9H4g8CoIoohZ5NA0aR2UgkYpNB5uocTSaLlrDblMH09Gt8d90r3YpJkanxjA4GGkbO7RNxSABq8jDGnkpD+UD5NV7Bxvbk9wvv+/3uPece66A/yEWi42FQqHVfD7/cbPZtIEglUpjOp3uZHR0dBvAn3gDIRqNgjE4OKj0+Xzf3NzcfD4wMCCjf5TLZbTbbajVatzf3+Pu7q5uNpt35ufnvwBQAScQRREEldfr9RWLxan+/n5YrVa+jFarhVfQQyQSCU4EhULhX15engEgSrjC0dHRVqlUmjQYDBgaGgKtuTqz4mTgIoVCASaTCX19fajVapOHh4dbFJBks9mxcDi8qtFoJEajkfVyJWi1WkxMTMDhcIAT8x6D7/Dd6+vr1fHx8TGp2+3+iqo5+YCzBwIBToK5ubl/mQwPDyMSibAs2Gw2UHNRrValz8/PDUk8Hv9EqVRCr9fj4uICTNflcqFer+Pg4AB7e3uoVCq8x9Rxfn6O7u5uqFQq8FspZXxHTekggByA3W4Hr9PpNDeRL3I1cMhkMrBrnZ2dyGQyvNYIs7OzVbJNPjIyAraLwYdcjR8wXl5eIJfLwRIFQQDLYkm3t7c1CdGPPT4+cpOImp4PODMeaK+n10As2jBbrHifHOjS6qAguVFimkqlwAMmIQnHV1dX4NDQhVwuhyZTV6pgIktzDzkkk0lEwhEEzs7ASQr5Ai4vL1nuccfCwsLO/v6+p9FoyJhF6ekJro/cPCzIZLNQa7rQoK77/SdgWWpKkCaJ5EB9aWnpe6nH40nRMBnJV4f5gw+FX3/5GX/8/htXRZdOzzqhJWn6nl6YbTZqqhrhULD16fT0d8FgcFtYW1vD5uamfGVl5cd4IjldKhZACdkJvKfWUANrxEaJV4hiGVaL1b+7653hXzwRZQr2X76xsfG1xWIRaZzbNPv/CdrjEL9cX/+WXFBSgEPgzxuwG3Yans9OT0+naBZMIJDNfzudzp8WFxd/APAX3uAf9WOTxOPLdosAAAAASUVORK5CYII=',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'passkey',
'default': '',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'only_internal',
'advanced': True,
'type': 'bool',
'default': 1,
'description': 'Only search for internal releases.'
},
{
'name': 'prefer_internal',
'advanced': True,
'type': 'bool',
'default': 1,
'description': 'Favors internal releases over non-internal releases.'
},
{
'name': 'favor',
'advanced': True,
'default': 'both',
'type': 'dropdown',
'values': [('Encodes & Remuxes', 'both'), ('Encodes', 'encode'), ('Remuxes', 'remux'), ('None', 'none')],
'description': 'Give extra scoring to encodes or remuxes.'
},
{
'name': 'extra_score',
'advanced': True,
'type': 'int',
'default': 20,
'description': 'Starting score for each release found via this provider.',
},
],
},
],
}]
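The `favor` bonus applied at the top of this hunk can be isolated as a small helper. This is a hypothetical standalone sketch; the real provider mutates a running `torrentscore` instead of returning one:

```python
import re

def remux_bonus(encoding, favor, base_score=0):
    # Mirrors the branch above: +200 when the release description
    # mentions "Remux" and the user's "favor" setting is 'remux' or 'both'.
    if re.search('Remux', encoding) and favor in ('remux', 'both'):
        return base_score + 200
    return base_score
```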


@@ -0,0 +1,78 @@
import time
import traceback
from couchpotato.core.helpers.variable import getImdb, md5, cleanHost
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.base import YarrProvider
from couchpotato.environment import Env
log = CPLog(__name__)
class TorrentProvider(YarrProvider):
protocol = 'torrent'
proxy_domain = None
proxy_list = []
def imdbMatch(self, url, imdbId):
if getImdb(url) == imdbId:
return True
if url[:4] == 'http':
try:
cache_key = md5(url)
data = self.getCache(cache_key, url)
except IOError:
log.error('Failed to open %s.', url)
return False
return getImdb(data) == imdbId
return False
def getDomain(self, url = ''):
forced_domain = self.conf('domain')
if forced_domain:
return cleanHost(forced_domain).rstrip('/') + url
if not self.proxy_domain:
for proxy in self.proxy_list:
prop_name = 'proxy.%s' % proxy
last_check = float(Env.prop(prop_name, default = 0))
if last_check > time.time() - 86400:
continue
data = ''
try:
data = self.urlopen(proxy, timeout = 3, show_error = False)
except:
log.debug('Failed %s proxy %s: %s', (self.getName(), proxy, traceback.format_exc()))
if self.correctProxy(data):
log.debug('Using proxy for %s: %s', (self.getName(), proxy))
self.proxy_domain = proxy
break
Env.prop(prop_name, time.time())
if not self.proxy_domain:
log.error('No %s proxies left, please add one in settings, or let us know which one to add on the forum.', self.getName())
return None
return cleanHost(self.proxy_domain).rstrip('/') + url
def correctProxy(self, data):
return True
class TorrentMagnetProvider(TorrentProvider):
protocol = 'torrent_magnet'
download = None
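The proxy rotation in `getDomain()` throttles availability checks with a timestamp stored via `Env.prop`. The cooldown test reduces to the following (standalone sketch; 86400 seconds is 24 hours):

```python
import time

def proxy_needs_check(last_check, now=None):
    # A proxy checked within the last 24 hours is skipped, matching
    # `if last_check > time.time() - 86400: continue` above.
    if now is None:
        now = time.time()
    return last_check <= now - 86400
```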


@@ -0,0 +1,139 @@
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'test': 'http://www.bit-hdtv.com/',
'login': 'http://www.bit-hdtv.com/takelogin.php',
'login_check': 'http://www.bit-hdtv.com/messages.php',
'detail': 'http://www.bit-hdtv.com/details.php?id=%s',
'search': 'http://www.bit-hdtv.com/torrents.php?',
}
# Searches for movies only - BiT-HDTV's subcategory and resolution search filters appear to be broken
http_time_between_calls = 1 # Seconds
def _search(self, media, quality, results):
query = self.buildUrl(media, quality)
url = "%s&%s" % (self.urls['search'], query)
data = self.getHTMLData(url)
if data:
# Remove BiT-HDTV's output garbage so outdated BS4 versions successfully parse the HTML
split_data = data.partition('-->')
if '## SELECT COUNT(' in split_data[0]:
data = split_data[2]
html = BeautifulSoup(data)
try:
result_table = html.find('table', attrs = {'width': '750', 'class': ''})
if result_table is None:
return
entries = result_table.find_all('tr')
for result in entries[1:]:
cells = result.find_all('td')
link = cells[2].find('a')
torrent_id = link['href'].replace('/details.php?id=', '')
results.append({
'id': torrent_id,
'name': link.contents[0].get_text(),
'url': cells[0].find('a')['href'],
'detail_url': self.urls['detail'] % torrent_id,
'size': self.parseSize(cells[6].get_text()),
'seeders': tryInt(cells[8].string),
'leechers': tryInt(cells[9].string),
'get_more_info': self.getMoreInfo,
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
def getLoginParams(self):
return {
'username': self.conf('username'),
'password': self.conf('password'),
}
def getMoreInfo(self, item):
full_description = self.getCache('bithdtv.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
html = BeautifulSoup(full_description)
nfo_pre = html.find('table', attrs = {'class': 'detail'})
description = toUnicode(nfo_pre.text) if nfo_pre else ''
item['description'] = description
return item
def loginSuccess(self, output):
return 'logout.php' in output.lower()
loginCheckSuccess = loginSuccess
config = [{
'name': 'bithdtv',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'BiT-HDTV',
'description': '<a href="http://bit-hdtv.com">BiT-HDTV</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAABnRSTlMAAAAAAABupgeRAAABMklEQVR4AZ3Qu0ojcQCF8W9MJcQbJNgEEQUbQVIqWgnaWfkIvoCgggixEAmIhRtY2GV3w7KwU61B0EYIxmiw0YCik84ipaCuc0nmP5dcjIUgOjqDvxf4OAdf9mnMLcUJyPyGSCP+YRdC+Kp8iagJKhuS+InYRhTGgDbeV2uEMand4ZRxizjXHQEimxhraAnUr73BNqQxMiNeV2SwcjTLEVtb4Zl10mXutvOWm2otw5Sxz6TGTbdd6ncuYvVLXAXrvM+ruyBpy1S3JLGDfUQ1O6jn5vTsrJXvqSt4UNfj6vxTRPxBHER5QeSirhLGk/5rWN+ffB1XZuxjnDy1q87m7TS+xOGA+Iv4gfkbaw+nOMXHDHnITGEk0VfRFnn4Po4vNYm6RGukmggR0L08+l+e4HMeASo/i6AJUjLgAAAAAElFTkSuQmCC',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'username',
'default': '',
},
{
'name': 'password',
'default': '',
'type': 'password',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 20,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]
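The comment-partition trick in `_search()` above can be reproduced in isolation. BiT-HDTV sometimes prepends SQL debug output that older BeautifulSoup versions choke on, so everything up to the first `-->` is dropped when the debug marker is present (sketch with a toy payload):

```python
def strip_debug_prefix(data):
    # Drop the '## SELECT COUNT(...' debug prefix up to the closing
    # HTML-comment marker, as done before BeautifulSoup(data) above.
    head, _, tail = data.partition('-->')
    if '## SELECT COUNT(' in head:
        return tail
    return data
```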


@@ -0,0 +1,134 @@
import traceback
from bs4 import BeautifulSoup, SoupStrainer
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'test': 'https://www.bitsoup.me/',
'login': 'https://www.bitsoup.me/takelogin.php',
'login_check': 'https://www.bitsoup.me/my.php',
'search': 'https://www.bitsoup.me/browse.php?%s',
'baseurl': 'https://www.bitsoup.me/%s',
}
http_time_between_calls = 1 # Seconds
only_tables_tags = SoupStrainer('table')
def _searchOnTitle(self, title, movie, quality, results):
url = self.urls['search'] % self.buildUrl(title, movie, quality)
data = self.getHTMLData(url)
if data:
html = BeautifulSoup(data, 'html.parser', parse_only = self.only_tables_tags)
try:
result_table = html.find('table', attrs = {'class': 'koptekst'})
if not result_table or 'nothing found!' in data.lower():
return
entries = result_table.find_all('tr')
for result in entries[1:]:
all_cells = result.find_all('td')
torrent = all_cells[1].find('a')
download = all_cells[3].find('a')
torrent_id = torrent['href']
torrent_id = torrent_id.replace('details.php?id=', '')
torrent_id = torrent_id.replace('&hit=1', '')
torrent_name = torrent.getText()
torrent_size = self.parseSize(all_cells[7].getText())
torrent_seeders = tryInt(all_cells[9].getText())
torrent_leechers = tryInt(all_cells[10].getText())
torrent_url = self.urls['baseurl'] % download['href']
torrent_detail_url = self.urls['baseurl'] % torrent['href']
results.append({
'id': torrent_id,
'name': torrent_name,
'size': torrent_size,
'seeders': torrent_seeders,
'leechers': torrent_leechers,
'url': torrent_url,
'detail_url': torrent_detail_url,
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
def getLoginParams(self):
return {
'username': self.conf('username'),
'password': self.conf('password'),
'ssl': 'yes',
}
def loginSuccess(self, output):
return 'logout.php' in output.lower()
loginCheckSuccess = loginSuccess
config = [{
'name': 'bitsoup',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'Bitsoup',
'description': '<a href="https://bitsoup.me">Bitsoup</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAB8ElEQVR4AbWSS2sTURiGz3euk0mswaE37HhNhIrajQheFgF3rgR/lAt/gOBCXNZlo6AbqfUWRVCxi04wqUnTRibpJLaJzdzOOZ6WUumyC5/VHOb9eN/FA91uFx0FjI4IPfgiGLTWH73tn348GKmN7ijD0d2b41fO5qJEaX24AWNIUrVQCTTJ3Llx6vbV6Vtzk7Gi9+ebi996guFDDYAQAVj4FExP5qdOZB49W62t/zH3hECcwsPnbWeMXz6Xi2K1f0ApeK3hMCHHbP5gvvoriBgFAAQJEAxhjJ4u+YWTNsVI6b1JgtPWZkoIefKy4fcii2OTw2BABs7wj3bYDlLL4rvjGWOdTser1j5Xf7c3Q/MbHQYApxItvnm31mhQQ71eX2vUB76/vsWB2hg0QuogrMwLIG8P3InM2/eVGXeDViqVwWB79vRU2lgJYmdHcgXCTAXQFJTN5HguvDCR2Hxsxe8EvT54nlcul5vNpqDIEgwRQanAhAAABgRIyiQcjpIkkTOuWyqVoN/vSylX67XXH74uV1vHRUyxxFqbLBCSmBpiXSq6xcL5QrGYzWZ3XQIAwdlOJB+/aL764ucdmncYs0WsCI7kvTnn+qyDMEnTVCn1Tz5KsBFg6fvWcmsUAcnYNC/g2hnromvvqbHvxv+39S+MX+bWkFXwAgAAAABJRU5ErkJggg==',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'username',
'default': '',
},
{
'name': 'password',
'default': '',
'type': 'password',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 20,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]
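The torrent-id extraction in Bitsoup's `_searchOnTitle()` is plain string surgery on the details link. A standalone sketch of the two inline `replace()` calls:

```python
def extract_torrent_id(href):
    # 'details.php?id=12345&hit=1' -> '12345', as done inline above.
    return href.replace('details.php?id=', '').replace('&hit=1', '')
```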


@@ -0,0 +1,116 @@
import re
import json
import traceback
from couchpotato.core.helpers.variable import tryInt, getIdentifier
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'test': 'https://hdbits.org/',
'detail': 'https://hdbits.org/details.php?id=%s',
'download': 'https://hdbits.org/download.php?id=%s&passkey=%s',
'api': 'https://hdbits.org/api/torrents'
}
http_time_between_calls = 1 # Seconds
def _post_query(self, **params):
post_data = {
'username': self.conf('username'),
'passkey': self.conf('passkey')
}
post_data.update(params)
try:
result = self.getJsonData(self.urls['api'], data = json.dumps(post_data))
if result:
if result['status'] != 0:
log.error('Error searching hdbits: %s' % result['message'])
else:
return result['data']
except:
pass
return None
def _search(self, movie, quality, results):
match = re.match(r'tt(\d{7})', getIdentifier(movie))
data = self._post_query(imdb = {'id': match.group(1)})
if data:
try:
for result in data:
results.append({
'id': result['id'],
'name': result['name'],
'url': self.urls['download'] % (result['id'], self.conf('passkey')),
'detail_url': self.urls['detail'] % result['id'],
'size': tryInt(result['size']) / 1024 / 1024,
'seeders': tryInt(result['seeders']),
'leechers': tryInt(result['leechers'])
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
config = [{
'name': 'hdbits',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'HDBits',
'wizard': True,
'description': '<a href="http://hdbits.org">HDBits</a>',
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAABi0lEQVR4AZWSzUsbQRjGdyabTcvSNPTSHlpQQeMHJApC8CJRvHgQQU969+LJP8G7f4N3DwpeFRQvRr0EKaUl0ATSpkigUNFsMl/r9NmZLCEHA/nNO5PfvMPDm0DI6fV3ZxiolEICe1oZCBVCCmBPKwOh2ErKBHGE4KYEXBpSLkUlqO4LcM7f+6nVhRnOhSkOz/hexk+tL+YL0yPF2YmN4tynD++4gTLGkNNac9YFLoREBR1+cnF3dFY6v/m6PD+FaXiNJtgA4xYbABxiGrz6+6HWaI5/+Qh37YS0/3Znc8UxwNGBIIBX22z+/ZdJ+4wzyjpR4PEpODg8tgUXBv2iWUzSpa12B0IR6n6lvt8Aek2lZHb084+fdRNgrwY8z81PjhVy2d2ttUrtV/lbBa+JXGEpDMPnoF2tN1QYRqVUtf6nFbThb7wk7le395elcqhASLb39okDiHY00VCtCTEHwSiH4AI0lkOiT1dwMeSfT3SRxiQWNO7Zwj1egkoVIQFMKvSiC3bcjXq9Jf8DcDIRT3hh10kAAAAASUVORK5CYII=',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'username',
'default': '',
},
{
'name': 'passkey',
'default': '',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
},
],
},
],
}]
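HDBits' JSON API takes only the digits of an IMDb tt-identifier, which is what the `re.match` in `_search()` extracts. A standalone sketch (note the pattern, like the original, assumes 7-digit ids):

```python
import re

def imdb_numeric_id(identifier):
    # 'tt0133093' -> '0133093'; anything else -> None.
    match = re.match(r'tt(\d{7})', identifier)
    return match.group(1) if match else None
```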


@@ -0,0 +1,195 @@
import re
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import toUnicode, tryUrlencode
from couchpotato.core.helpers.variable import tryInt, splitString
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'download': 'https://www.ilovetorrents.me/%s',
'detail': 'https://www.ilovetorrents.me/%s',
'search': 'https://www.ilovetorrents.me/browse.php?search=%s&page=%s&cat=%s',
'test': 'https://www.ilovetorrents.me/',
'login': 'https://www.ilovetorrents.me/takelogin.php',
'login_check': 'https://www.ilovetorrents.me'
}
cat_ids = [
(['41'], ['720p', '1080p', 'brrip']),
(['19'], ['cam', 'ts', 'dvdrip', 'tc', 'r5', 'scr']),
(['20'], ['dvdr'])
]
cat_backup_id = 200
disable_provider = False
http_time_between_calls = 1
def _searchOnTitle(self, title, movie, quality, results):
page = 0
total_pages = 1
cats = self.getCatId(quality)
while page < total_pages:
movieTitle = tryUrlencode('"%s" %s' % (title, movie['info']['year']))
search_url = self.urls['search'] % (movieTitle, page, cats[0])
page += 1
data = self.getHTMLData(search_url)
if data:
try:
results_table = None
data_split = splitString(data, '<table')
soup = None
for x in data_split:
soup = BeautifulSoup(x)
results_table = soup.find('table', attrs = {'class': 'koptekst'})
if results_table:
break
if not results_table:
return
try:
pagelinks = soup.findAll(href = re.compile('page'))
page_numbers = [int(re.search('page=(?P<page_number>\d+)', i['href']).group('page_number')) for i in pagelinks]
total_pages = max(page_numbers)
except:
pass
entries = results_table.find_all('tr')
for result in entries[1:]:
prelink = result.find(href = re.compile('details.php'))
link = prelink['href']
download = result.find('a', href = re.compile('download.php'))['href']
if link and download:
def extra_score(item):
trusted = (0, 10)[result.find('img', alt = re.compile('Trusted')) is not None]
vip = (0, 20)[result.find('img', alt = re.compile('VIP')) is not None]
confirmed = (0, 30)[result.find('img', alt = re.compile('Helpers')) is not None]
moderated = (0, 50)[result.find('img', alt = re.compile('Moderator')) is not None]
return confirmed + trusted + vip + moderated
id = re.search('id=(?P<id>\d+)&', link).group('id')
url = self.urls['download'] % download
fileSize = self.parseSize(result.select('td.rowhead')[5].text)
results.append({
'id': id,
'name': toUnicode(prelink.find('b').text),
'url': url,
'detail_url': self.urls['detail'] % link,
'size': fileSize,
'seeders': tryInt(result.find_all('td')[2].string),
'leechers': tryInt(result.find_all('td')[3].string),
'extra_score': extra_score,
'get_more_info': self.getMoreInfo
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
def getLoginParams(self):
return {
'username': self.conf('username'),
'password': self.conf('password'),
'submit': 'Welcome to ILT',
}
def getMoreInfo(self, item):
cache_key = 'ilt.%s' % item['id']
description = self.getCache(cache_key)
if not description:
try:
full_description = self.getHTMLData(item['detail_url'])
html = BeautifulSoup(full_description)
nfo_pre = html.find('td', attrs = {'class': 'main'}).findAll('table')[1]
description = toUnicode(nfo_pre.text) if nfo_pre else ''
except:
log.error('Failed getting more info for %s', item['name'])
description = ''
self.setCache(cache_key, description, timeout = 25920000)
item['description'] = description
return item
def loginSuccess(self, output):
return 'logout.php' in output.lower()
loginCheckSuccess = loginSuccess
config = [{
'name': 'ilovetorrents',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'ILoveTorrents',
'description': 'Where the Love of Torrents is Born',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAACPUlEQVR4AYWM0U9SbxjH3+v266I/oNvWZTfd2J1d0ZqbZEFwWrUImOKs4YwtumFKZvvlJJADR2TCQQlMPKg5NmpREgaekAPnBATKgmK1LqQlx6awHnZWF1Tr2Xfvvs+7z+dB0mlO7StpAh+M4S/2jbo3w8+xvJvlnSneEt+10zwer5ujNUOoChjALWFw5XOwdCAk/P57cGvPl+Oht0W7VJHN5NC1uW1BON4hGjXbwpVWMZhsy9v7sEIXAsDNYBXgdkEoIKyWD2CF8ut/aOXTZc/fBSgLWw1BgA4BDHOV0GkT90cBQpXahU5TFomsb38XhJC5/Tbh1P8c6rJlBeGfAeyMhUFwNVcs9lxV9Ot0dwmyd+mrNvRtbJ2fSPC6Z3Vsvub2z3sDFACAAYzk0+kUyxEkyfN7PopqNBro55A+P6yPKIrL5zF1HwjdeBJJCObIsZO79bo3sHhWhglo5WMV3mazuVPb4fLvSL8/FAkB1hK6rXQPwYhMyROK8VK5LAiH/jsMt0HQjxiN4/ePdoilllcqDyt3Mkg8mRBNbIhMb8RERkowQA/p76g0/UDDdCoNmDminM0qSK5vlpE5kugCHhNPxntwWmJPYTMZtYcFR6ABHQsVRlYLukVORaaULvqKI46keFSCv77kSPS6kxrPptLNDHgz16fWBtyxe6v5h08LUy+KI8ushqTPWWIX8Sg6b45IrGtyW6zXFb/hpQf9m3oqfWuB0fpSw0uZ4WB69En69uOk2rmO2V52PXj+A/mI4ESKpb2HAAAAAElFTkSuQmCC',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False
},
{
'name': 'username',
'label': 'Username',
'type': 'string',
'default': '',
'description': 'The user name for your ILT account',
},
{
'name': 'password',
'label': 'Password',
'type': 'password',
'default': '',
'description': 'The password for your ILT account.',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
}
]
}]
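The `extra_score()` closure above awards fixed bonuses for uploader badges on the result row (Trusted +10, VIP +20, Helpers +30, Moderator +50). A hypothetical standalone version, with the badge detection replaced by booleans:

```python
def badge_bonus(trusted=False, vip=False, helper=False, moderator=False):
    # Sum of the fixed per-badge bonuses used in extra_score() above.
    return ((10 if trusted else 0) + (20 if vip else 0)
            + (30 if helper else 0) + (50 if moderator else 0))
```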


@@ -0,0 +1,172 @@
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
import six
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'test': 'https://www.iptorrents.com/',
'base_url': 'https://www.iptorrents.com',
'login': 'https://www.iptorrents.com/torrents/',
'login_check': 'https://www.iptorrents.com/inbox.php',
'search': 'https://www.iptorrents.com/torrents/?%s%%s&q=%s&qf=ti&p=%%d',
}
http_time_between_calls = 1 # Seconds
cat_backup_id = None
def buildUrl(self, title, media, quality):
return self._buildUrl(title.replace(':', ''), quality)
def _buildUrl(self, query, quality):
cat_ids = self.getCatId(quality)
if not cat_ids:
log.warning('Unable to find category ids for identifier "%s"', quality.get('identifier'))
return None
return self.urls['search'] % ("&".join(("l%d=" % x) for x in cat_ids), tryUrlencode(query).replace('%', '%%'))
def _searchOnTitle(self, title, media, quality, results):
freeleech = '' if not self.conf('freeleech') else '&free=on'
base_url = self.buildUrl(title, media, quality)
if not base_url: return
pages = 1
current_page = 1
while current_page <= pages and not self.shuttingDown():
data = self.getHTMLData(base_url % (freeleech, current_page))
if data:
html = BeautifulSoup(data)
try:
page_nav = html.find('span', attrs = {'class': 'page_nav'})
if page_nav:
next_link = page_nav.find("a", text = "Next")
if next_link:
final_page_link = next_link.previous_sibling.previous_sibling
pages = int(final_page_link.string)
result_table = html.find('table', attrs = {'class': 'torrents'})
if not result_table or 'nothing found!' in data.lower():
return
entries = result_table.find_all('tr')
for result in entries[1:]:
torrent = result.find_all('td')
if len(torrent) <= 1:
break
torrent = torrent[1].find('a')
torrent_id = torrent['href'].replace('/details.php?id=', '')
torrent_name = six.text_type(torrent.string)
torrent_download_url = self.urls['base_url'] + (result.find_all('td')[3].find('a'))['href'].replace(' ', '.')
torrent_details_url = self.urls['base_url'] + torrent['href']
torrent_size = self.parseSize(result.find_all('td')[5].string)
torrent_seeders = tryInt(result.find('td', attrs = {'class': 'ac t_seeders'}).string)
torrent_leechers = tryInt(result.find('td', attrs = {'class': 'ac t_leechers'}).string)
results.append({
'id': torrent_id,
'name': torrent_name,
'url': torrent_download_url,
'detail_url': torrent_details_url,
'size': torrent_size,
'seeders': torrent_seeders,
'leechers': torrent_leechers,
})
except:
log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))
break
current_page += 1
def getLoginParams(self):
return {
'username': self.conf('username'),
'password': self.conf('password'),
'login': 'submit',
}
def loginSuccess(self, output):
return 'don\'t have an account' not in output.lower()
def loginCheckSuccess(self, output):
return '/logout.php' in output.lower()
config = [{
'name': 'iptorrents',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'IPTorrents',
'description': '<a href="http://www.iptorrents.com">IPTorrents</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABRklEQVR42qWQO0vDUBiG8zeKY3EqQUtNO7g0J6ZJ1+ifKIIFQXAqDYKCyaaYxM3udrZLHdRFhXrZ6liCW6mubfk874EESgqaeOCF7/Y8hEh41aq6yZi2nyZgBGya9XKtZs4No05pAkZV2YbEmyMMsoSxLQeC46wCTdPPY4HruPQyGIhF97qLWsS78Miydn4XdK46NJ9OsQAYBzMIMf8MQ9wtCnTdWCaIDx/u7uljOIQEe0hiIWPamSTLay3+RxOCSPI9+RJAo7Er9r2bnqjBFAqyK+VyK4f5/Cr5ni8OFKVCz49PFI5GdNvvU7ttE1M1zMU+8AMqFksEhrMnQsBDzqmDAwzx2ehRLwT7yyCI+vSC99c3mozH1NxrJgWWtR1BOECfEJSVCm6WCzJGCA7+IWhBsM4zywDPwEp4vCjx2DzBH2ODAfsDb33Ps6dQwJgAAAAASUVORK5CYII=',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': False,
},
{
'name': 'username',
'default': '',
},
{
'name': 'password',
'default': '',
'type': 'password',
},
{
'name': 'freeleech',
'default': 0,
'type': 'bool',
'description': 'Only search for [FreeLeech] torrents.',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]
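`_buildUrl()` doubles the percent signs in the urlencoded query because the resulting string is itself still a format template: the freeleech flag and page number get substituted later in `_searchOnTitle()`. A minimal illustration (the `browse?` path is made up for the example):

```python
def make_search_template(encoded_query):
    # '%' doubled so '%20' etc. survive the later '%'-formatting pass
    # that fills in the page number, mirroring
    # tryUrlencode(query).replace('%', '%%') above.
    return 'browse?q=%s&p=%%d' % encoded_query.replace('%', '%%')
```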


@@ -0,0 +1,182 @@
import re
import traceback
from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt, getIdentifier
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentMagnetProvider
log = CPLog(__name__)
class Base(TorrentMagnetProvider):
urls = {
'detail': '%s/%s',
'search': '%s/%s-i%s/',
}
cat_ids = [
(['cam'], ['cam']),
(['telesync'], ['ts', 'tc']),
(['screener', 'tvrip'], ['screener']),
(['x264', '720p', '1080p', 'blu-ray', 'hdrip'], ['bd50', '1080p', '720p', 'brrip']),
(['dvdrip'], ['dvdrip']),
(['dvd'], ['dvdr']),
]
http_time_between_calls = 1 # Seconds
cat_backup_id = None
proxy_list = [
'https://kickass.to',
'http://kickass.pw',
'http://kickassto.come.in',
'http://katproxy.ws',
'http://www.kickassunblock.info',
'http://www.kickassproxy.info',
'http://katph.eu',
'http://kickassto.come.in',
]
def _search(self, media, quality, results):
data = self.getHTMLData(self.urls['search'] % (self.getDomain(), 'm', getIdentifier(media).replace('tt', '')))
if data:
cat_ids = self.getCatId(quality)
table_order = ['name', 'size', None, 'age', 'seeds', 'leechers']
try:
html = BeautifulSoup(data)
resultdiv = html.find('div', attrs = {'class': 'tabs'})
for result in resultdiv.find_all('div', recursive = False):
if result.get('id').lower().strip('tab-') not in cat_ids:
continue
try:
for temp in result.find_all('tr'):
if 'firstr' in (temp.get('class') or []) or not temp.get('id'):
continue
new = {}
nr = 0
for td in temp.find_all('td'):
column_name = table_order[nr]
if column_name:
if column_name == 'name':
link = td.find('div', {'class': 'torrentname'}).find_all('a')[2]
new['id'] = temp.get('id')[-7:]
new['name'] = link.text
new['url'] = td.find('a', 'imagnet')['href']
new['detail_url'] = self.urls['detail'] % (self.getDomain(), link['href'][1:])
new['verified'] = True if td.find('a', 'iverify') else False
new['score'] = 100 if new['verified'] else 0
elif column_name == 'size':
new['size'] = self.parseSize(td.text)
elif column_name == 'age':
new['age'] = self.ageToDays(td.text)
elif column_name == 'seeds':
new['seeders'] = tryInt(td.text)
elif column_name == 'leechers':
new['leechers'] = tryInt(td.text)
nr += 1
# Only store verified torrents
if self.conf('only_verified') and not new['verified']:
continue
results.append(new)
except:
log.error('Failed parsing KickAssTorrents: %s', traceback.format_exc())
except AttributeError:
log.debug('No search results found.')
def ageToDays(self, age_str):
age = 0
age_str = age_str.replace('&nbsp;', ' ')
regex = '(\d*.?\d+).(sec|hour|day|week|month|year)+'
matches = re.findall(regex, age_str)
for match in matches:
nr, size = match
mult = 1
if size == 'week':
mult = 7
elif size == 'month':
mult = 30.5
elif size == 'year':
mult = 365
age += tryInt(nr) * mult
return tryInt(age)
def isEnabled(self):
return super(Base, self).isEnabled() and self.getDomain()
def correctProxy(self, data):
return 'search query' in data.lower()
config = [{
'name': 'kickasstorrents',
'groups': [
{
'tab': 'searcher',
'list': 'torrent_providers',
'name': 'KickAssTorrents',
'description': '<a href="https://kat.ph/">KickAssTorrents</a>',
'wizard': True,
'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAACD0lEQVR42pXK20uTcRjA8d/fsJsuap0orBuFlm3hir3JJvQOVmuwllN20Lb2isI2nVHKjBqrCWYaNnNuBrkSWxglhDVJOkBdSWUOq5FgoiOrMdRJ2xPPxW+8OUf1ge/FcyCUSVe2qedK5U/OxNTTXRNXEQ52Glb4O6dNEfK1auJkvRY7+/zxnQbA/D596laXcY3OWOiaIX2393SGznUmxkUo/YkDgqHemuzobQ7+NV+reo5Q1mqp68GABdY3+/EloO+JeN4tEqiFU8f3CwhyWo9E7wfMgI0ELTDx0AvjIxcgvZoC9P7NMN7yMmrFeoKa68rfDfmrARsNN0Ihr55cx59ctZWSiwS5bLKpwW4dYJH+M/B6/CYszE0BFZ+egG+Ln+HRoBN/cpl1pV6COIMkOnBVA/w+fXgGKJVM4LxhumMleoL06hJ3wKcCfl+/TAKKx17gnFePRwkqxR4BQSpFkbCrrQJueI7mWpyfATQ9OQY43+uv/+PutBycJ3y2qn2x7jY50GJvnwLKZjOwspyE5I8F4N+1yr1uwqcs3ym63Hwo29EiAyzUWQVr6WVAS4lZCPutQG/2GtES2YiW3d3XflYKtL72kzAcdEDHeSa3czeIMyyz/TApRKvcFfE0isHbJMnrHCf6xTLb1ORvWNlWo91cvHrJUQo0o6ZoRi7dIiT/g2WEDi27Iyov21xMCvgNfXvtwIACfHwAAAAASUVORK5CYII=',
'options': [
{
'name': 'enabled',
'type': 'enabler',
'default': True,
},
{
'name': 'domain',
'advanced': True,
'label': 'Proxy server',
'description': 'Domain for requests, keep empty to let CouchPotato pick.',
},
{
'name': 'seed_ratio',
'label': 'Seed ratio',
'type': 'float',
'default': 1,
'description': 'Will not be (re)moved until this seed ratio is met.',
},
{
'name': 'seed_time',
'label': 'Seed time',
'type': 'int',
'default': 40,
'description': 'Will not be (re)moved until this seed time (in hours) is met.',
},
{
'name': 'only_verified',
'advanced': True,
'type': 'bool',
'default': False,
'description': 'Only search for verified releases.'
},
{
'name': 'extra_score',
'advanced': True,
'label': 'Extra Score',
'type': 'int',
'default': 0,
'description': 'Starting score for each release found via this provider.',
}
],
},
],
}]


@@ -0,0 +1,267 @@
import htmlentitydefs
import json
import re
import time
import traceback
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import getTitle, tryInt, mergeDicts, getIdentifier
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
from dateutil.parser import parse
import six
log = CPLog(__name__)
class Base(TorrentProvider):
urls = {
'domain': 'https://tls.passthepopcorn.me',
'detail': 'https://tls.passthepopcorn.me/torrents.php?torrentid=%s',
'torrent': 'https://tls.passthepopcorn.me/torrents.php',
'login': 'https://tls.passthepopcorn.me/ajax.php?action=login',
'login_check': 'https://tls.passthepopcorn.me/ajax.php?action=login',
'search': 'https://tls.passthepopcorn.me/search/%s/0/7/%d'
}
http_time_between_calls = 2
def _search(self, media, quality, results):
movie_title = getTitle(media)
quality_id = quality['identifier']
params = mergeDicts(self.quality_search_params[quality_id].copy(), {
'order_by': 'relevance',
'order_way': 'descending',
'searchstr': getIdentifier(media)
})
url = '%s?json=noredirect&%s' % (self.urls['torrent'], tryUrlencode(params))
res = self.getJsonData(url)
try:
if 'Movies' not in res:
return
authkey = res['AuthKey']
passkey = res['PassKey']
for ptpmovie in res['Movies']:
if 'Torrents' not in ptpmovie:
log.debug('Movie %s (%s) has NO torrents', (ptpmovie['Title'], ptpmovie['Year']))
continue
log.debug('Movie %s (%s) has %d torrents', (ptpmovie['Title'], ptpmovie['Year'], len(ptpmovie['Torrents'])))
for torrent in ptpmovie['Torrents']:
torrent_id = tryInt(torrent['Id'])
torrentdesc = '%s %s %s' % (torrent['Resolution'], torrent['Source'], torrent['Codec'])
torrentscore = 0
if 'GoldenPopcorn' in torrent and torrent['GoldenPopcorn']:
torrentdesc += ' HQ'
if self.conf('prefer_golden'):
torrentscore += 5000
if 'Scene' in torrent and torrent['Scene']:
torrentdesc += ' Scene'
if self.conf('prefer_scene'):
torrentscore += 2000
if 'RemasterTitle' in torrent and torrent['RemasterTitle']:
torrentdesc += self.htmlToASCII(' %s' % torrent['RemasterTitle'])
torrentdesc += ' (%s)' % quality_id
torrent_name = re.sub('[^A-Za-z0-9\-_ \(\).]+', '', '%s (%s) - %s' % (movie_title, ptpmovie['Year'], torrentdesc))
def extra_check(item):
return self.torrentMeetsQualitySpec(item, quality_id)
results.append({
'id': torrent_id,
'name': torrent_name,
'Source': torrent['Source'],
'Checked': 'true' if torrent['Checked'] else 'false',
'Resolution': torrent['Resolution'],
'url': '%s?action=download&id=%d&authkey=%s&torrent_pass=%s' % (self.urls['torrent'], torrent_id, authkey, passkey),
'detail_url': self.urls['detail'] % torrent_id,
'date': tryInt(time.mktime(parse(torrent['UploadTime']).timetuple())),
'size': tryInt(torrent['Size']) / (1024 * 1024),
'seeders': tryInt(torrent['Seeders']),
'leechers': tryInt(torrent['Leechers']),
'score': torrentscore,
'extra_check': extra_check,
})
except:
log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))
def torrentMeetsQualitySpec(self, torrent, quality):
if quality not in self.post_search_filters:
return True
reqs = self.post_search_filters[quality].copy()
if self.conf('require_approval'):
log.debug('Config: Require staff-approval activated')
reqs['Checked'] = ['true']
for field, specs in reqs.items():
matches_one = False
seen_one = False
if field not in torrent:
log.debug('Torrent with ID %s has no field "%s"; cannot apply post-search-filter for quality "%s"', (torrent['id'], field, quality))
continue
for spec in specs:
if len(spec) > 0 and spec[0] == '!':
# a negative rule; if the field matches, return False
if torrent[field] == spec[1:]:
return False
else:
# a positive rule; if any of the possible positive values match the field, return True
log.debug('Checking if torrents field %s equals %s' % (field, spec))
seen_one = True
if torrent[field] == spec:
log.debug('Torrent satisfied %s == %s' % (field, spec))
matches_one = True
if seen_one and not matches_one:
log.debug('Torrent did not satisfy requirements, ignoring')
return False
return True
def htmlToUnicode(self, text):
def fixup(m):
txt = m.group(0)
if txt[:2] == "&#":
# character reference
try:
if txt[:3] == "&#x":
return unichr(int(txt[3:-1], 16))
else:
return unichr(int(txt[2:-1]))
except ValueError:
pass
else:
# named entity
try:
txt = unichr(htmlentitydefs.name2codepoint[txt[1:-1]])
except KeyError:
pass
return txt # leave as is
return re.sub("&#?\w+;", fixup, six.u('%s') % text)
def unicodeToASCII(self, text):
import unicodedata
return ''.join(c for c in unicodedata.normalize('NFKD', text) if unicodedata.category(c) != 'Mn')
def htmlToASCII(self, text):
return self.unicodeToASCII(self.htmlToUnicode(text))
    def getLoginParams(self):
        return {
            'username': self.conf('username'),
            'password': self.conf('password'),
            'passkey': self.conf('passkey'),
            'keeplogged': '1',
            'login': 'Login'
        }

    def loginSuccess(self, output):
        try:
            return json.loads(output).get('Result', '').lower() == 'ok'
        except:
            return False

    loginCheckSuccess = loginSuccess


config = [{
    'name': 'passthepopcorn',
    'groups': [
        {
            'tab': 'searcher',
            'list': 'torrent_providers',
            'name': 'PassThePopcorn',
            'description': '<a href="https://passthepopcorn.me">PassThePopcorn.me</a>',
            'wizard': True,
            'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAARklEQVQoz2NgIAP8BwMiGWRpIN1JNWn/t6T9f532+W8GkNt7vzz9UkfarZVpb68BuWlbnqW1nU7L2DMx7eCoBlpqGOppCQB83zIgIg+wWQAAAABJRU5ErkJggg==',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False
                },
                {
                    'name': 'domain',
                    'advanced': True,
                    'label': 'Proxy server',
                    'description': 'Domain for requests (HTTPS only!), keep empty to use the default (tls.passthepopcorn.me).',
                },
                {
                    'name': 'username',
                    'default': '',
                },
                {
                    'name': 'password',
                    'default': '',
                    'type': 'password',
                },
                {
                    'name': 'passkey',
                    'default': '',
                },
                {
                    'name': 'prefer_golden',
                    'advanced': True,
                    'type': 'bool',
                    'label': 'Prefer golden',
                    'default': 1,
                    'description': 'Favors Golden Popcorn releases over all other releases.'
                },
                {
                    'name': 'prefer_scene',
                    'advanced': True,
                    'type': 'bool',
                    'label': 'Prefer scene',
                    'default': 0,
                    'description': 'Favors scene releases over non-scene releases.'
                },
                {
                    'name': 'require_approval',
                    'advanced': True,
                    'type': 'bool',
                    'label': 'Require approval',
                    'default': 0,
                    'description': 'Require staff approval for releases to be accepted.'
                },
                {
                    'name': 'seed_ratio',
                    'label': 'Seed ratio',
                    'type': 'float',
                    'default': 1,
                    'description': 'Will not be (re)moved until this seed ratio is met.',
                },
                {
                    'name': 'seed_time',
                    'label': 'Seed time',
                    'type': 'int',
                    'default': 40,
                    'description': 'Will not be (re)moved until this seed time (in hours) is met.',
                },
                {
                    'name': 'extra_score',
                    'advanced': True,
                    'label': 'Extra Score',
                    'type': 'int',
                    'default': 20,
                    'description': 'Starting score for each release found via this provider.',
                }
            ],
        }
    ]
}]
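The post-search filter rules applied by `torrentMeetsQualitySpec` can be sketched standalone, assuming the semantics read from the code above: each field maps to a list of specs, a spec starting with `!` is a veto (the field must not equal the rest), and any other spec is a positive alternative the field must match at least once. The `reqs` dict and field values below are made-up illustrations, not real PassThePopcorn filter data.

```python
def meets_spec(torrent, reqs):
    """Return False if any negative rule matches or any field with
    positive rules matches none of them; otherwise True."""
    for field, specs in reqs.items():
        if field not in torrent:
            continue  # can't check a field the result doesn't carry
        matches_one = False
        seen_one = False
        for spec in specs:
            if spec.startswith('!'):
                if torrent[field] == spec[1:]:
                    return False  # negative rule hit: reject outright
            else:
                seen_one = True
                if torrent[field] == spec:
                    matches_one = True
        if seen_one and not matches_one:
            return False  # positive rules existed but none matched
    return True

reqs = {'Resolution': ['720p', '1080p'], 'Source': ['!CAM']}
print(meets_spec({'Resolution': '1080p', 'Source': 'Blu-ray'}, reqs))  # True
print(meets_spec({'Resolution': '1080p', 'Source': 'CAM'}, reqs))      # False
```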

import traceback

from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider

log = CPLog(__name__)


class Base(TorrentProvider):

    urls = {
        'test': 'https://www.sceneaccess.eu/',
        'login': 'https://www.sceneaccess.eu/login',
        'login_check': 'https://www.sceneaccess.eu/inbox',
        'detail': 'https://www.sceneaccess.eu/details?id=%s',
        'search': 'https://www.sceneaccess.eu/browse?c%d=%d',
        'archive': 'https://www.sceneaccess.eu/archive?&c%d=%d',
        'download': 'https://www.sceneaccess.eu/%s',
    }

    http_time_between_calls = 1  # Seconds

    def _searchOnTitle(self, title, media, quality, results):
        url = self.buildUrl(title, media, quality)

        data = self.getHTMLData(url)
        if data:
            html = BeautifulSoup(data)

            try:
                resultsTable = html.find('table', attrs = {'id': 'torrents-table'})
                if resultsTable is None:
                    return

                entries = resultsTable.find_all('tr', attrs = {'class': 'tt_row'})
                for result in entries:
                    link = result.find('td', attrs = {'class': 'ttr_name'}).find('a')
                    url = result.find('td', attrs = {'class': 'td_dl'}).find('a')
                    leechers = result.find('td', attrs = {'class': 'ttr_leechers'}).find('a')
                    torrent_id = link['href'].replace('details?id=', '')

                    results.append({
                        'id': torrent_id,
                        'name': link['title'],
                        'url': self.urls['download'] % url['href'],
                        'detail_url': self.urls['detail'] % torrent_id,
                        'size': self.parseSize(result.find('td', attrs = {'class': 'ttr_size'}).contents[0]),
                        'seeders': tryInt(result.find('td', attrs = {'class': 'ttr_seeders'}).find('a').string),
                        'leechers': tryInt(leechers.string) if leechers else 0,
                        'get_more_info': self.getMoreInfo,
                    })

            except:
                log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))

    def getMoreInfo(self, item):
        full_description = self.getCache('sceneaccess.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
        html = BeautifulSoup(full_description)
        nfo_pre = html.find('div', attrs = {'id': 'details_table'})
        description = toUnicode(nfo_pre.text) if nfo_pre else ''

        item['description'] = description
        return item

    # Login
    def getLoginParams(self):
        return {
            'username': self.conf('username'),
            'password': self.conf('password'),
            'submit': 'come on in',
        }

    def loginSuccess(self, output):
        return '/inbox' in output.lower()

    loginCheckSuccess = loginSuccess


config = [{
    'name': 'sceneaccess',
    'groups': [
        {
            'tab': 'searcher',
            'list': 'torrent_providers',
            'name': 'SceneAccess',
            'description': '<a href="https://sceneaccess.eu/">SceneAccess</a>',
            'wizard': True,
            'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAAABnRSTlMAAAAAAABupgeRAAACT0lEQVR4AYVQS0sbURidO3OTmajJ5FElTTOkPmZ01GhHrIq0aoWAj1Vc+A/cuRMXbl24V9SlCGqrLhVFCrooEhCp2BAx0mobTY2kaR7qmOm87EXL1EWxh29xL+c7nPMdgGHYO5bF/gdbefnr6WlbWRnxluMwAB4Z0uEgXa7nwaDL7+/RNPzxbYvb/XJ0FBYVfd/ayh0fQ4qCGEHcm0KLRZUk7Pb2YRJPRwcsKMidnKD3t9VVT3s7BDh+z5FOZ3Vfn3h+Hltfx00mRRSRWFcUmmVNhYVqPn8dj3va2oh+txvcQRVF9ebm1fi4k+dRFbosY5rm4Hk7xxULQnJnx93S4g0EIEEQRoDLo6PrWEw8Pc0eHLwYGopMTDirqlJ7eyhYYGHhfgfHCcKYksZGVB/NcXI2mw6HhZERqrjYTNPHi4tFPh8aJIYIhgPlcCRDoZLW1s75+Z/7+59nZ/OJhLWigqAoKZX6Mjf3dXkZ3pydGYLc4aEoCCkInzQ1fRobS2xuvllaonkedfArnY5OTdGVldBkOADgqq2Nr6z8CIWaJietDHOhKB+HhwFKC6Gnq4ukKJvP9zcSbjYDXbeVlkKzuZBhnnV3e3t6UOmaJO0ODibW1hB1GYkg8R/gup7Z3TVZLJ5AILW9LcZiVpYtYBhw16O3t7cauckyeF9Tgz0ATpL2+nopmWycmbnY2LiKRjFk6/d7+/vRJfl4HGzV1T0UIM43MGBvaIBWK/YvwM5w+IMgGH8tkyEgvIpE7M3Nt6qqZrNyOq1kMmouh455Ggz+BhKY4GEc2CfwAAAAAElFTkSuQmCC',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False,
                },
                {
                    'name': 'username',
                    'default': '',
                },
                {
                    'name': 'password',
                    'default': '',
                    'type': 'password',
                },
                {
                    'name': 'seed_ratio',
                    'label': 'Seed ratio',
                    'type': 'float',
                    'default': 1,
                    'description': 'Will not be (re)moved until this seed ratio is met.',
                },
                {
                    'name': 'seed_time',
                    'label': 'Seed time',
                    'type': 'int',
                    'default': 40,
                    'description': 'Will not be (re)moved until this seed time (in hours) is met.',
                },
                {
                    'name': 'extra_score',
                    'advanced': True,
                    'label': 'Extra Score',
                    'type': 'int',
                    'default': 20,
                    'description': 'Starting score for each release found via this provider.',
                }
            ],
        },
    ],
}]

import re
import traceback

from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import toUnicode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentMagnetProvider
import six

log = CPLog(__name__)


class Base(TorrentMagnetProvider):

    urls = {
        'detail': '%s/torrent/%s',
        'search': '%s/search/%%s/%%s/7/%%s'
    }

    cat_backup_id = 200
    disable_provider = False
    http_time_between_calls = 0

    proxy_list = [
        'https://nobay.net',
        'https://thebay.al',
        'https://thepiratebay.se',
        'http://thepiratebay.cd',
        'http://thebootlegbay.com',
        'http://www.tpb.gr',
        'http://tpbproxy.co.uk',
        'http://pirateproxy.in',
        'http://www.getpirate.com',
        'http://piratebay.io',
        'http://bayproxy.li',
        'http://proxybay.pw',
    ]

    def _search(self, media, quality, results):
        page = 0
        total_pages = 1
        cats = self.getCatId(quality)

        base_search_url = self.urls['search'] % self.getDomain()

        while page < total_pages:

            search_url = base_search_url % self.buildUrl(media, page, cats)
            page += 1

            data = self.getHTMLData(search_url)

            if data:
                try:
                    soup = BeautifulSoup(data)
                    results_table = soup.find('table', attrs = {'id': 'searchResult'})

                    if not results_table:
                        return

                    try:
                        total_pages = len(soup.find('div', attrs = {'align': 'center'}).find_all('a'))
                    except:
                        pass

                    entries = results_table.find_all('tr')
                    for result in entries[1:]:
                        link = result.find(href = re.compile(r'torrent\/\d+\/'))
                        download = result.find(href = re.compile('magnet:'))

                        try:
                            size = re.search('Size (?P<size>.+),', six.text_type(result.select('font.detDesc')[0])).group('size')
                        except:
                            continue

                        if link and download:

                            def extra_score(item):
                                trusted = (0, 10)[result.find('img', alt = re.compile('Trusted')) is not None]
                                vip = (0, 20)[result.find('img', alt = re.compile('VIP')) is not None]
                                confirmed = (0, 30)[result.find('img', alt = re.compile('Helpers')) is not None]
                                moderated = (0, 50)[result.find('img', alt = re.compile('Moderator')) is not None]

                                return confirmed + trusted + vip + moderated

                            results.append({
                                'id': re.search(r'/(?P<id>\d+)/', link['href']).group('id'),
                                'name': six.text_type(link.string),
                                'url': download['href'],
                                'detail_url': self.getDomain(link['href']),
                                'size': self.parseSize(size),
                                'seeders': tryInt(result.find_all('td')[2].string),
                                'leechers': tryInt(result.find_all('td')[3].string),
                                'extra_score': extra_score,
                                'get_more_info': self.getMoreInfo
                            })

                except:
                    log.error('Failed getting results from %s: %s', (self.getName(), traceback.format_exc()))

    def isEnabled(self):
        return super(Base, self).isEnabled() and self.getDomain()

    def correctProxy(self, data):
        return 'title="Pirate Search"' in data

    def getMoreInfo(self, item):
        full_description = self.getCache('tpb.%s' % item['id'], item['detail_url'], cache_timeout = 25920000)
        html = BeautifulSoup(full_description)
        nfo_pre = html.find('div', attrs = {'class': 'nfo'})
        description = ''
        try:
            description = toUnicode(nfo_pre.text)
        except:
            pass

        item['description'] = description
        return item


config = [{
    'name': 'thepiratebay',
    'groups': [
        {
            'tab': 'searcher',
            'list': 'torrent_providers',
            'name': 'ThePirateBay',
            'description': 'The world\'s largest bittorrent tracker. <a href="http://fucktimkuik.org/">ThePirateBay</a>',
            'wizard': True,
            'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAAAAAA6mKC9AAAA3UlEQVQY02P4DwT/YADIZvj//7qnozMYODmtAAusZoCDELDAegYGViZhAWZmRoYoqIDupfhNN1M3dTBEggXWMZg9jZRXV77YxhAOFpjDwMAPMoCXmcHsF1SAQZ6bQY2VgUEbKHClcAYzg3mINEO8jSCD478/DPsZmvqWblu1bOmStes3Pp0ezVDF4Gif0Hfx9///74/ObRZ2YNiZ47C8XIRBxFJR0jbSSUud4f9zAQWn8NTuziAt2zy5xIMM/z8LFX0E+fD/x0MRDCeA1v7Z++Y/FDzyvAtyBxIA+h8A8ZKLeT+lJroAAAAASUVORK5CYII=',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False
                },
                {
                    'name': 'domain',
                    'advanced': True,
                    'label': 'Proxy server',
                    'description': 'Domain for requests, keep empty to let CouchPotato pick.',
                },
                {
                    'name': 'seed_ratio',
                    'label': 'Seed ratio',
                    'type': 'float',
                    'default': 1,
                    'description': 'Will not be (re)moved until this seed ratio is met.',
                },
                {
                    'name': 'seed_time',
                    'label': 'Seed time',
                    'type': 'int',
                    'default': 40,
                    'description': 'Will not be (re)moved until this seed time (in hours) is met.',
                },
                {
                    'name': 'extra_score',
                    'advanced': True,
                    'label': 'Extra Score',
                    'type': 'int',
                    'default': 0,
                    'description': 'Starting score for each release found via this provider.',
                }
            ],
        }
    ]
}]
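The icon-based bonus scoring in `extra_score` above uses the `(0, n)[bool]` tuple-indexing idiom, which is equivalent to `n if bool else 0`. A spelled-out sketch of the same weighting (the icon names and point values are taken from the code above; the list-of-alt-texts input is a simplification of the BeautifulSoup lookups):

```python
def trust_bonus(icon_alts):
    """Sum the score bonuses for whichever uploader-trust icons appear
    in the result row, given the alt texts of its <img> tags."""
    bonuses = {'Trusted': 10, 'VIP': 20, 'Helpers': 30, 'Moderator': 50}
    return sum(points for name, points in bonuses.items()
               if any(name in alt for alt in icon_alts))

print(trust_bonus(['VIP']))                   # 20
print(trust_bonus(['Trusted', 'Moderator']))  # 60
print(trust_bonus([]))                        # 0
```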

import traceback

from bs4 import BeautifulSoup
from couchpotato.core.helpers.encoding import tryUrlencode
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider

log = CPLog(__name__)


class Base(TorrentProvider):

    urls = {
        'test': 'https://www.torrentbytes.net/',
        'login': 'https://www.torrentbytes.net/takelogin.php',
        'login_check': 'https://www.torrentbytes.net/inbox.php',
        'detail': 'https://www.torrentbytes.net/details.php?id=%s',
        'search': 'https://www.torrentbytes.net/browse.php?search=%s&cat=%d',
        'download': 'https://www.torrentbytes.net/download.php?id=%s&name=%s',
    }

    cat_ids = [
        ([5], ['720p', '1080p', 'bd50']),
        ([19], ['cam']),
        ([19], ['ts', 'tc']),
        ([19], ['r5', 'scr']),
        ([19], ['dvdrip']),
        ([19], ['brrip']),
        ([20], ['dvdr']),
    ]

    http_time_between_calls = 1  # Seconds
    cat_backup_id = None

    def _searchOnTitle(self, title, movie, quality, results):
        url = self.urls['search'] % (tryUrlencode('%s %s' % (title.replace(':', ''), movie['info']['year'])), self.getCatId(quality)[0])

        data = self.getHTMLData(url)

        if data:
            html = BeautifulSoup(data)

            try:
                result_table = html.find('table', attrs = {'border': '1'})
                if not result_table:
                    return

                entries = result_table.find_all('tr')

                for result in entries[1:]:
                    cells = result.find_all('td')
                    link = cells[1].find('a', attrs = {'class': 'index'})
                    full_id = link['href'].replace('details.php?id=', '')
                    torrent_id = full_id[:6]

                    results.append({
                        'id': torrent_id,
                        'name': link.contents[0],
                        'url': self.urls['download'] % (torrent_id, link.contents[0]),
                        'detail_url': self.urls['detail'] % torrent_id,
                        'size': self.parseSize(cells[6].contents[0] + cells[6].contents[2]),
                        'seeders': tryInt(cells[8].find('span').contents[0]),
                        'leechers': tryInt(cells[9].find('span').contents[0]),
                    })

            except:
                log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))

    def getLoginParams(self):
        return {
            'username': self.conf('username'),
            'password': self.conf('password'),
            'login': 'submit',
        }

    def loginSuccess(self, output):
        # Needles must be lowercase, since they are compared against output.lower()
        return 'logout.php' in output.lower() or 'welcome' in output.lower()

    loginCheckSuccess = loginSuccess


config = [{
    'name': 'torrentbytes',
    'groups': [
        {
            'tab': 'searcher',
            'list': 'torrent_providers',
            'name': 'TorrentBytes',
            'description': '<a href="http://torrentbytes.net">TorrentBytes</a>',
            'wizard': True,
            'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAeFBMVEUAAAAAAEQAA1QAEmEAKnQALHYAMoEAOokAQpIASYsASZgAS5UATZwATosATpgAVJ0AWZwAYZ4AZKAAaZ8Ab7IAcbMAfccAgcQAgcsAhM4AiscAjMkAmt0AoOIApecAp/EAqvQAs+kAt+wA3P8A4f8A//8VAAAfDbiaAl08AAAAjUlEQVQYGQXBO04DQRAFwHqz7Z8sECIl5f73ISRD5GBs7UxTlWfg9vYXnvJRQJqOL88D6BAwJtMMumHUVCl60aa6H93IrIv0b+157f1lpk+fm87lMWrZH0vncKbXdRUQrRmrh9C6Iwkq6rg4PXZcyXmbizzeV/g+rDra0rGve8jPKLSOJNi2AQAwAGjwD7ApPkEHdtPQAAAAAElFTkSuQmCC',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False,
                },
                {
                    'name': 'username',
                    'default': '',
                },
                {
                    'name': 'password',
                    'default': '',
                    'type': 'password',
                },
                {
                    'name': 'seed_ratio',
                    'label': 'Seed ratio',
                    'type': 'float',
                    'default': 1,
                    'description': 'Will not be (re)moved until this seed ratio is met.',
                },
                {
                    'name': 'seed_time',
                    'label': 'Seed time',
                    'type': 'int',
                    'default': 40,
                    'description': 'Will not be (re)moved until this seed time (in hours) is met.',
                },
                {
                    'name': 'extra_score',
                    'advanced': True,
                    'label': 'Extra Score',
                    'type': 'int',
                    'default': 20,
                    'description': 'Starting score for each release found via this provider.',
                }
            ],
        },
    ],
}]

from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider

log = CPLog(__name__)


class Base(TorrentProvider):

    urls = {
        'test': 'http://www.td.af/',
        'login': 'http://www.td.af/torrents/',
        'login_check': 'http://www.torrentday.com/userdetails.php',
        'detail': 'http://www.td.af/details.php?id=%s',
        'search': 'http://www.td.af/V3/API/API.php',
        'download': 'http://www.td.af/download.php/%s/%s',
    }

    http_time_between_calls = 1  # Seconds

    def _searchOnTitle(self, title, media, quality, results):
        query = '"%s" %s' % (title, media['info']['year'])

        data = {
            '/browse.php?': None,
            'cata': 'yes',
            'jxt': 8,
            'jxw': 'b',
            'search': query,
        }

        data = self.getJsonData(self.urls['search'], data = data)

        try:
            torrents = data.get('Fs', [])[0].get('Cn', {}).get('torrents', [])
        except:
            return

        for torrent in torrents:
            results.append({
                'id': torrent['id'],
                'name': torrent['name'],
                'url': self.urls['download'] % (torrent['id'], torrent['fname']),
                'detail_url': self.urls['detail'] % torrent['id'],
                'size': self.parseSize(torrent.get('size')),
                'seeders': tryInt(torrent.get('seed')),
                'leechers': tryInt(torrent.get('leech')),
            })

    def getLoginParams(self):
        return {
            'username': self.conf('username'),
            'password': self.conf('password'),
            'submit.x': 18,
            'submit.y': 11,
            'submit': 'submit',
        }

    def loginSuccess(self, output):
        return 'Password not correct' not in output

    def loginCheckSuccess(self, output):
        return 'logout.php' in output.lower()


config = [{
    'name': 'torrentday',
    'groups': [
        {
            'tab': 'searcher',
            'list': 'torrent_providers',
            'name': 'TorrentDay',
            'description': '<a href="http://www.td.af/">TorrentDay</a>',
            'wizard': True,
            'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAC5ElEQVQ4y12TXUgUURTH//fO7Di7foeQJH6gEEEIZZllVohfSG/6UA+RSFAQQj74VA8+Bj30lmAlRVSEvZRfhNhaka5ZUG1paKaW39tq5O6Ou+PM3M4o6m6X+XPPzD3zm/+dcy574r515WfIW8CZBM4YAA5Gc/aQC3yd7oXYEONcsISE5dTDh91HS0t7FEWhBUAeN9ynV/d9qJAgE4AECURAcVsGlCCnly26LMA0IQwTa52dje3d3e3hcPi8qqrrMjcVYI3EHCQZlkFOHBwR2QHh2ASAAIJxWGAQEDxjePhs3527XjJwnb37OHBq0T+Tyyjh+9KnEzNJ7nouc1Q/3A3HGsOvnJy+PSUlj81w2Lny9WuJ6+3AmTjD4HOcrdR2dWXLRQePvyaSLfQOPMPC8mC9iHCsOxSyzJCelzdSXlNzD5ujpb25Wbfc/XXJemTXF4+nnCNq+AMLe50uFfEJTiw4GXSFtiHL0SnIq66+p0kSArqO+eH3RdsAv9+f5vW7L7GICq6rmM8XBCAXlBw90rOyxibn5yzfkg/L09M52/jxqdESaIrBXHYZZbB1GX8cEpySxKIB8S5XcOnvqpli1zuwmrTtoLjw5LOK/eeuWsE4JH5IRPaPZKiKigmPp+5pa+u1aEjIMhEgrRkmi9mgxGUhM7LNJSzOzsE3+cOeExovXOjdytE0LV4zqNZUtV0uZzAGoGkhDH/2YHZiErmv4uyWQnZZWc+hoqL3WzlTExN5hhA8IEwkZWZOxwB++30YG/9GkYCPvqAaHAW5uWPROW86OmqCprUR7z1yZDAGQNuCvkoB/baIKUBWMTYymv+gra3eJNvjXu+B562tFyXqTJ6YuHK8rKwvBmC3vR7cOCPQLWFz8LnfXWUrJo9U19BwMyUlJRjTSMJ2ENxUiGxq9KXQfwqYlnWstvbR5aamG9g0uzM8Q4OFt++3NNixQ2NgYmeN03FOTUv7XVpV9aKisvLl1vN/WVhNc/Fi1NEAAAAASUVORK5CYII=',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False,
                },
                {
                    'name': 'username',
                    'default': '',
                },
                {
                    'name': 'password',
                    'default': '',
                    'type': 'password',
                },
                {
                    'name': 'seed_ratio',
                    'label': 'Seed ratio',
                    'type': 'float',
                    'default': 1,
                    'description': 'Will not be (re)moved until this seed ratio is met.',
                },
                {
                    'name': 'seed_time',
                    'label': 'Seed time',
                    'type': 'int',
                    'default': 40,
                    'description': 'Will not be (re)moved until this seed time (in hours) is met.',
                },
                {
                    'name': 'extra_score',
                    'advanced': True,
                    'label': 'Extra Score',
                    'type': 'int',
                    'default': 0,
                    'description': 'Starting score for each release found via this provider.',
                }
            ],
        },
    ],
}]

import traceback

from bs4 import BeautifulSoup
from couchpotato.core.helpers.variable import tryInt
from couchpotato.core.logger import CPLog
from couchpotato.core.media._base.providers.torrent.base import TorrentProvider
import six

log = CPLog(__name__)


class Base(TorrentProvider):

    urls = {
        'test': 'http://www.torrentleech.org/',
        'login': 'http://www.torrentleech.org/user/account/login/',
        'login_check': 'http://torrentleech.org/user/messages',
        'detail': 'http://www.torrentleech.org/torrent/%s',
        'search': 'http://www.torrentleech.org/torrents/browse/index/query/%s/categories/%d',
        'download': 'http://www.torrentleech.org%s',
    }

    http_time_between_calls = 1  # Seconds
    cat_backup_id = None

    def _searchOnTitle(self, title, media, quality, results):
        url = self.urls['search'] % self.buildUrl(title, media, quality)

        data = self.getHTMLData(url)

        if data:
            html = BeautifulSoup(data)

            try:
                result_table = html.find('table', attrs = {'id': 'torrenttable'})
                if not result_table:
                    return

                entries = result_table.find_all('tr')

                for result in entries[1:]:
                    link = result.find('td', attrs = {'class': 'name'}).find('a')
                    url = result.find('td', attrs = {'class': 'quickdownload'}).find('a')
                    details = result.find('td', attrs = {'class': 'name'}).find('a')

                    results.append({
                        'id': link['href'].replace('/torrent/', ''),
                        'name': six.text_type(link.string),
                        'url': self.urls['download'] % url['href'],
                        'detail_url': self.urls['download'] % details['href'],
                        'size': self.parseSize(result.find_all('td')[4].string),
                        'seeders': tryInt(result.find('td', attrs = {'class': 'seeders'}).string),
                        'leechers': tryInt(result.find('td', attrs = {'class': 'leechers'}).string),
                    })

            except:
                log.error('Failed parsing %s: %s', (self.getName(), traceback.format_exc()))

    def getLoginParams(self):
        return {
            'username': self.conf('username'),
            'password': self.conf('password'),
            'remember_me': 'on',
            'login': 'submit',
        }

    def loginSuccess(self, output):
        return '/user/account/logout' in output.lower() or 'welcome back' in output.lower()

    loginCheckSuccess = loginSuccess


config = [{
    'name': 'torrentleech',
    'groups': [
        {
            'tab': 'searcher',
            'list': 'torrent_providers',
            'name': 'TorrentLeech',
            'description': '<a href="http://torrentleech.org">TorrentLeech</a>',
            'wizard': True,
            'icon': 'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAIAAACQkWg2AAACHUlEQVR4AZVSO48SYRSdGTCBEMKzILLAWiybkKAGMZRUUJEoDZX7B9zsbuQPYEEjNLTQkYgJDwsoSaxspEBsCITXjjNAIKi8AkzceXgmbHQ1NJ5iMufmO9/9zrmXlCSJ+B8o75J8Pp/NZj0eTzweBy0Wi4PBYD6f12o1r9ebTCZx+22HcrnMsuxms7m6urTZ7LPZDMVYLBZ8ZV3yo8aq9Pq0wzCMTqe77dDv9y8uLyAWBH6xWOyL0K/56fcb+rrPgPZ6PZfLRe1fsl6vCUmGKIqoqNXqdDr9Dbjps9znUV0uTqdTjuPkDoVCIfcuJ4gizjMMm8u9vW+1nr04czqdK56c37CbKY9j2+1WEARZ0Gq1RFHAz2q1qlQqXxoN69HRcDjUarW8ZD6QUigUOnY8uKYH8N1sNkul9yiGw+F6vS4Rxn8EsodEIqHRaOSnq9T7ajQazWQycEIR1AEBYDabSZJyHDucJyegwWBQr9ebTCaKvHd4cCQANUU9evwQ1Ofz4YvUKUI43GE8HouSiFiNRhOowWBIpVLyHITJkuW3PwgAEf3pgIwxF5r+OplMEsk3CPT5szCMnY7EwUdhwUh/CXiej0Qi3idPz89fdrpdbsfBzH7S3Q9K5pP4c0sAKpVKoVAQGO1ut+t0OoFAQHkH2Da/3/+but3uarWK0ZMQoNdyucRutdttmqZxMTzY7XaYxsrgtUjEZrNhkSwWyy/0NCatZumrNQAAAABJRU5ErkJggg==',
            'options': [
                {
                    'name': 'enabled',
                    'type': 'enabler',
                    'default': False,
                },
                {
                    'name': 'username',
                    'default': '',
                },
                {
                    'name': 'password',
                    'default': '',
                    'type': 'password',
                },
                {
                    'name': 'seed_ratio',
                    'label': 'Seed ratio',
                    'type': 'float',
                    'default': 1,
                    'description': 'Will not be (re)moved until this seed ratio is met.',
                },
                {
                    'name': 'seed_time',
                    'label': 'Seed time',
                    'type': 'int',
                    'default': 40,
                    'description': 'Will not be (re)moved until this seed time (in hours) is met.',
                },
                {
                    'name': 'extra_score',
                    'advanced': True,
                    'label': 'Extra Score',
                    'type': 'int',
                    'default': 20,
                    'description': 'Starting score for each release found via this provider.',
                }
            ],
        },
    ],
}]

Some files were not shown because too many files have changed in this diff.