jellyfin-kodi/resources/lib/libraries/requests/utils.py
# -*- coding: utf-8 -*-

"""
requests.utils
~~~~~~~~~~~~~~

This module provides utility functions that are used within Requests
that are also useful for external consumption.
"""

import cgi
import codecs
import collections
import io
import os
import platform
import re
import sys
import socket
import struct
import warnings

from . import __version__
from . import certs
from .compat import parse_http_list as _parse_list_header
from .compat import (quote, urlparse, bytes, str, OrderedDict, unquote, is_py2,
                     builtin_str, getproxies, proxy_bypass, urlunparse,
                     basestring)
from .cookies import RequestsCookieJar, cookiejar_from_dict
from .structures import CaseInsensitiveDict
from .exceptions import InvalidURL, FileModeWarning

_hush_pyflakes = (RequestsCookieJar,)

NETRC_FILES = ('.netrc', '_netrc')

DEFAULT_CA_BUNDLE_PATH = certs.where()
def dict_to_sequence(d):
    """Returns an internal sequence dictionary update."""
    if hasattr(d, 'items'):
        d = d.items()
    return d
def super_len(o):
    total_length = 0
    current_position = 0

    if hasattr(o, '__len__'):
        total_length = len(o)

    elif hasattr(o, 'len'):
        total_length = o.len

    elif hasattr(o, 'getvalue'):
        # e.g. BytesIO, cStringIO.StringIO
        total_length = len(o.getvalue())

    elif hasattr(o, 'fileno'):
        try:
            fileno = o.fileno()
        except io.UnsupportedOperation:
            pass
        else:
            total_length = os.fstat(fileno).st_size

            # Having used fstat to determine the file length, we need to
            # confirm that this file was opened up in binary mode.
            if 'b' not in o.mode:
                warnings.warn((
                    "Requests has determined the content-length for this "
                    "request using the binary size of the file: however, the "
                    "file has been opened in text mode (i.e. without the 'b' "
                    "flag in the mode). This may lead to an incorrect "
                    "content-length. In Requests 3.0, support will be removed "
                    "for files in text mode."),
                    FileModeWarning
                )

    if hasattr(o, 'tell'):
        current_position = o.tell()

    return max(0, total_length - current_position)
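# Example (illustrative): super_len() reports the bytes remaining from the
# current position, so a partially read buffer yields a smaller value:
#   buf = io.BytesIO(b'abcd')   # super_len(buf) == 4
#   buf.read(3)                 # super_len(buf) == 1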
def get_netrc_auth(url, raise_errors=False):
    """Returns the Requests tuple auth for a given url from netrc."""
    try:
        from netrc import netrc, NetrcParseError

        netrc_path = None

        for f in NETRC_FILES:
            try:
                loc = os.path.expanduser('~/{0}'.format(f))
            except KeyError:
                # os.path.expanduser can fail when $HOME is undefined and
                # getpwuid fails. See http://bugs.python.org/issue20164 &
                # https://github.com/kennethreitz/requests/issues/1846
                return

            if os.path.exists(loc):
                netrc_path = loc
                break

        # Abort early if there isn't one.
        if netrc_path is None:
            return

        ri = urlparse(url)

        # Strip port numbers from netloc. This weird `if...encode`` dance is
        # used for Python 3.2, which doesn't support unicode literals.
        splitstr = b':'
        if isinstance(url, str):
            splitstr = splitstr.decode('ascii')
        host = ri.netloc.split(splitstr)[0]

        try:
            _netrc = netrc(netrc_path).authenticators(host)
            if _netrc:
                # Return with login / password
                login_i = (0 if _netrc[0] else 1)
                return (_netrc[login_i], _netrc[2])
        except (NetrcParseError, IOError):
            # If there was a parsing error or a permissions issue reading the file,
            # we'll just skip netrc auth unless explicitly asked to raise errors.
            if raise_errors:
                raise

    # AppEngine hackiness.
    except (ImportError, AttributeError):
        pass
def guess_filename(obj):
    """Tries to guess the filename of the given object."""
    name = getattr(obj, 'name', None)
    if (name and isinstance(name, basestring) and name[0] != '<' and
            name[-1] != '>'):
        return os.path.basename(name)
def from_key_val_list(value):
    """Take an object and test to see if it can be represented as a
    dictionary. Unless it can not be represented as such, return an
    OrderedDict, e.g.,

    ::

        >>> from_key_val_list([('key', 'val')])
        OrderedDict([('key', 'val')])
        >>> from_key_val_list('string')
        ValueError: need more than 1 value to unpack
        >>> from_key_val_list({'key': 'val'})
        OrderedDict([('key', 'val')])
    """
    if value is None:
        return None

    if isinstance(value, (str, bytes, bool, int)):
        raise ValueError('cannot encode objects that are not 2-tuples')

    return OrderedDict(value)


def to_key_val_list(value):
    """Take an object and test to see if it can be represented as a
    dictionary. If it can be, return a list of tuples, e.g.,

    ::

        >>> to_key_val_list([('key', 'val')])
        [('key', 'val')]
        >>> to_key_val_list({'key': 'val'})
        [('key', 'val')]
        >>> to_key_val_list('string')
        ValueError: cannot encode objects that are not 2-tuples.
    """
    if value is None:
        return None

    if isinstance(value, (str, bytes, bool, int)):
        raise ValueError('cannot encode objects that are not 2-tuples')

    if isinstance(value, collections.Mapping):
        value = value.items()

    return list(value)
# From mitsuhiko/werkzeug (used with permission).
def parse_list_header(value):
    """Parse lists as described by RFC 2068 Section 2.

    In particular, parse comma-separated lists where the elements of
    the list may include quoted-strings. A quoted-string could
    contain a comma. A non-quoted string could have quotes in the
    middle. Quotes are removed automatically after parsing.

    It basically works like :func:`parse_set_header` just that items
    may appear multiple times and case sensitivity is preserved.

    The return value is a standard :class:`list`:

    >>> parse_list_header('token, "quoted value"')
    ['token', 'quoted value']

    To create a header from the :class:`list` again, use the
    :func:`dump_header` function.

    :param value: a string with a list header.
    :return: :class:`list`
    """
    result = []
    for item in _parse_list_header(value):
        if item[:1] == item[-1:] == '"':
            item = unquote_header_value(item[1:-1])
        result.append(item)
    return result


# From mitsuhiko/werkzeug (used with permission).
def parse_dict_header(value):
    """Parse lists of key, value pairs as described by RFC 2068 Section 2 and
    convert them into a python dict:

    >>> d = parse_dict_header('foo="is a fish", bar="as well"')
    >>> type(d) is dict
    True
    >>> sorted(d.items())
    [('bar', 'as well'), ('foo', 'is a fish')]

    If there is no value for a key it will be `None`:

    >>> parse_dict_header('key_without_value')
    {'key_without_value': None}

    To create a header from the :class:`dict` again, use the
    :func:`dump_header` function.

    :param value: a string with a dict header.
    :return: :class:`dict`
    """
    result = {}
    for item in _parse_list_header(value):
        if '=' not in item:
            result[item] = None
            continue
        name, value = item.split('=', 1)
        if value[:1] == value[-1:] == '"':
            value = unquote_header_value(value[1:-1])
        result[name] = value
    return result
# From mitsuhiko/werkzeug (used with permission).
def unquote_header_value(value, is_filename=False):
    r"""Unquotes a header value. (Reversal of :func:`quote_header_value`).
    This does not use the real unquoting but what browsers are actually
    using for quoting.

    :param value: the header value to unquote.
    """
    if value and value[0] == value[-1] == '"':
        # this is not the real unquoting, but fixing this so that the
        # RFC is met will result in bugs with internet explorer and
        # probably some other browsers as well. IE for example is
        # uploading files with "C:\foo\bar.txt" as filename
        value = value[1:-1]

        # if this is a filename and the starting characters look like
        # a UNC path, then just return the value without quotes. Using the
        # replace sequence below on a UNC path has the effect of turning
        # the leading double slash into a single slash and then
        # _fix_ie_filename() doesn't work correctly. See #458.
        if not is_filename or value[:2] != '\\\\':
            return value.replace('\\\\', '\\').replace('\\"', '"')
    return value
def dict_from_cookiejar(cj):
    """Returns a key/value dictionary from a CookieJar.

    :param cj: CookieJar object to extract cookies from.
    """
    cookie_dict = {}

    for cookie in cj:
        cookie_dict[cookie.name] = cookie.value

    return cookie_dict


def add_dict_to_cookiejar(cj, cookie_dict):
    """Returns a CookieJar from a key/value dictionary.

    :param cj: CookieJar to insert cookies into.
    :param cookie_dict: Dict of key/values to insert into CookieJar.
    """
    cj2 = cookiejar_from_dict(cookie_dict)
    cj.update(cj2)
    return cj
def get_encodings_from_content(content):
    """Returns encodings from given content string.

    :param content: bytestring to extract encodings from.
    """
    warnings.warn((
        'In requests 3.0, get_encodings_from_content will be removed. For '
        'more information, please see the discussion on issue #2266. (This'
        ' warning should only appear once.)'),
        DeprecationWarning)

    charset_re = re.compile(r'<meta.*?charset=["\']*(.+?)["\'>]', flags=re.I)
    pragma_re = re.compile(r'<meta.*?content=["\']*;?charset=(.+?)["\'>]', flags=re.I)
    xml_re = re.compile(r'^<\?xml.*?encoding=["\']*(.+?)["\'>]')

    return (charset_re.findall(content) +
            pragma_re.findall(content) +
            xml_re.findall(content))


def get_encoding_from_headers(headers):
    """Returns encodings from given HTTP Header Dict.

    :param headers: dictionary to extract encoding from.
    """
    content_type = headers.get('content-type')

    if not content_type:
        return None

    content_type, params = cgi.parse_header(content_type)

    if 'charset' in params:
        return params['charset'].strip("'\"")

    if 'text' in content_type:
        return 'ISO-8859-1'
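# Example (illustrative):
#   get_encoding_from_headers({'content-type': 'application/json; charset=utf-8'})
# returns 'utf-8', while a bare 'text/html' content type falls back to the
# RFC 2616 default of 'ISO-8859-1'.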
def stream_decode_response_unicode(iterator, r):
    """Stream decodes an iterator."""
    if r.encoding is None:
        for item in iterator:
            yield item
        return

    decoder = codecs.getincrementaldecoder(r.encoding)(errors='replace')
    for chunk in iterator:
        rv = decoder.decode(chunk)
        if rv:
            yield rv
    rv = decoder.decode(b'', final=True)
    if rv:
        yield rv


def iter_slices(string, slice_length):
    """Iterate over slices of a string."""
    pos = 0
    while pos < len(string):
        yield string[pos:pos + slice_length]
        pos += slice_length
def get_unicode_from_response(r):
    """Returns the requested content back in unicode.

    :param r: Response object to get unicode content from.

    Tried:

    1. charset from content-type
    2. fall back and replace all unicode characters
    """
    warnings.warn((
        'In requests 3.0, get_unicode_from_response will be removed. For '
        'more information, please see the discussion on issue #2266. (This'
        ' warning should only appear once.)'),
        DeprecationWarning)

    tried_encodings = []

    # Try charset from content-type
    encoding = get_encoding_from_headers(r.headers)

    if encoding:
        try:
            return str(r.content, encoding)
        except UnicodeError:
            tried_encodings.append(encoding)

    # Fall back:
    try:
        return str(r.content, encoding, errors='replace')
    except TypeError:
        return r.content
# The unreserved URI characters (RFC 3986)
UNRESERVED_SET = frozenset(
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
    + "0123456789-._~")


def unquote_unreserved(uri):
    """Un-escape any percent-escape sequences in a URI that are unreserved
    characters. This leaves all reserved, illegal and non-ASCII bytes encoded.
    """
    parts = uri.split('%')
    for i in range(1, len(parts)):
        h = parts[i][0:2]
        if len(h) == 2 and h.isalnum():
            try:
                c = chr(int(h, 16))
            except ValueError:
                raise InvalidURL("Invalid percent-escape sequence: '%s'" % h)

            if c in UNRESERVED_SET:
                parts[i] = c + parts[i][2:]
            else:
                parts[i] = '%' + parts[i]
        else:
            parts[i] = '%' + parts[i]
    return ''.join(parts)
def requote_uri(uri):
    """Re-quote the given URI.

    This function passes the given URI through an unquote/quote cycle to
    ensure that it is fully and consistently quoted.
    """
    safe_with_percent = "!#$%&'()*+,/:;=?@[]~"
    safe_without_percent = "!#$&'()*+,/:;=?@[]~"
    try:
        # Unquote only the unreserved characters
        # Then quote only illegal characters (do not quote reserved,
        # unreserved, or '%')
        return quote(unquote_unreserved(uri), safe=safe_with_percent)
    except InvalidURL:
        # We couldn't unquote the given URI, so let's try quoting it, but
        # there may be unquoted '%'s in the URI. We need to make sure they're
        # properly quoted so they do not cause issues elsewhere.
        return quote(uri, safe=safe_without_percent)
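# Example (illustrative): an already well-quoted URI such as
# 'http://example.com/a%20b?x=1' passes through requote_uri() unchanged, while
# an escape of an unreserved character like '%7E' is normalised back to '~'.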
def address_in_network(ip, net):
    """
    This function allows you to check if an IP belongs to a network subnet
    Example: returns True if ip = 192.168.1.1 and net = 192.168.1.0/24
             returns False if ip = 192.168.1.1 and net = 192.168.100.0/24
    """
    ipaddr = struct.unpack('=L', socket.inet_aton(ip))[0]
    netaddr, bits = net.split('/')
    netmask = struct.unpack('=L', socket.inet_aton(dotted_netmask(int(bits))))[0]
    network = struct.unpack('=L', socket.inet_aton(netaddr))[0] & netmask
    return (ipaddr & netmask) == (network & netmask)


def dotted_netmask(mask):
    """
    Converts mask from /xx format to xxx.xxx.xxx.xxx
    Example: if mask is 24 function returns 255.255.255.0
    """
    bits = 0xffffffff ^ (1 << 32 - mask) - 1
    return socket.inet_ntoa(struct.pack('>I', bits))
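# Example (illustrative): for mask = 24, (1 << 32 - 24) - 1 is 0xff, so
# bits = 0xffffffff ^ 0xff = 0xffffff00, which packs to '255.255.255.0',
# the dotted netmask address_in_network() uses for its comparison.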
def is_ipv4_address(string_ip):
    try:
        socket.inet_aton(string_ip)
    except socket.error:
        return False
    return True


def is_valid_cidr(string_network):
    """Very simple check of the cidr format in no_proxy variable"""
    if string_network.count('/') == 1:
        try:
            mask = int(string_network.split('/')[1])
        except ValueError:
            return False

        if mask < 1 or mask > 32:
            return False

        try:
            socket.inet_aton(string_network.split('/')[0])
        except socket.error:
            return False
    else:
        return False
    return True
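# Example (illustrative): is_valid_cidr('192.168.1.0/24') is True, while
# '192.168.1.0' (no mask) and '192.168.1.0/40' (mask outside 1-32) are both
# rejected before any network comparison is attempted.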
def should_bypass_proxies(url):
    """
    Returns whether we should bypass proxies or not.
    """
    get_proxy = lambda k: os.environ.get(k) or os.environ.get(k.upper())

    # First check whether no_proxy is defined. If it is, check that the URL
    # we're getting isn't in the no_proxy list.
    no_proxy = get_proxy('no_proxy')
    netloc = urlparse(url).netloc

    if no_proxy:
        # We need to check whether we match here. We need to see if we match
        # the end of the netloc, both with and without the port.
        no_proxy = (
            host for host in no_proxy.replace(' ', '').split(',') if host
        )

        ip = netloc.split(':')[0]
        if is_ipv4_address(ip):
            for proxy_ip in no_proxy:
                if is_valid_cidr(proxy_ip):
                    if address_in_network(ip, proxy_ip):
                        return True
        else:
            for host in no_proxy:
                if netloc.endswith(host) or netloc.split(':')[0].endswith(host):
                    # The URL does match something in no_proxy, so we don't want
                    # to apply the proxies on this URL.
                    return True

    # If the system proxy settings indicate that this URL should be bypassed,
    # don't proxy.
    # The proxy_bypass function is incredibly buggy on OS X in early versions
    # of Python 2.6, so allow this call to fail. Only catch the specific
    # exceptions we've seen, though: this call failing in other ways can reveal
    # legitimate problems.
    try:
        bypass = proxy_bypass(netloc)
    except (TypeError, socket.gaierror):
        bypass = False

    if bypass:
        return True

    return False
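# Example (illustrative): with NO_PROXY='.internal.example,10.0.0.0/8' in the
# environment, 'http://api.internal.example:8080/v1' matches the host suffix
# and 'http://10.1.2.3/' matches the CIDR block, so both bypass the proxies.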
def get_environ_proxies(url):
    """Return a dict of environment proxies."""
    if should_bypass_proxies(url):
        return {}
    else:
        return getproxies()


def select_proxy(url, proxies):
    """Select a proxy for the url, if applicable.

    :param url: The url of the request
    :param proxies: A dictionary of schemes or schemes and hosts to proxy URLs
    """
    proxies = proxies or {}
    urlparts = urlparse(url)
    proxy = proxies.get(urlparts.scheme + '://' + urlparts.hostname)
    if proxy is None:
        proxy = proxies.get(urlparts.scheme)
    return proxy
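# Example (illustrative): lookup is most-specific first. With
#   proxies = {'http://example.com': 'http://proxy-a:8080', 'http': 'http://proxy-b:8080'}
# a request to 'http://example.com/path' selects proxy-a, while any other http
# host falls back to the scheme-only 'http' entry.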
def default_user_agent(name="python-requests"):
    """Return a string representing the default user agent."""
    return '%s/%s' % (name, __version__)


def default_headers():
    return CaseInsensitiveDict({
        'User-Agent': default_user_agent(),
        'Accept-Encoding': ', '.join(('gzip', 'deflate')),
        'Accept': '*/*',
        'Connection': 'keep-alive',
    })
def parse_header_links(value):
    """Return a list of parsed link headers, one dict per link.

    i.e. Link: <http:/.../front.jpeg>; rel=front; type="image/jpeg",<http://.../back.jpeg>; rel=back;type="image/jpeg"
    """
    links = []

    replace_chars = " '\""

    for val in re.split(", *<", value):
        try:
            url, params = val.split(";", 1)
        except ValueError:
            url, params = val, ''

        link = {}

        link["url"] = url.strip("<> '\"")

        for param in params.split(";"):
            try:
                key, value = param.split("=")
            except ValueError:
                break

            link[key.strip(replace_chars)] = value.strip(replace_chars)

        links.append(link)

    return links
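# Example (illustrative): the header value
#   '<https://example.com/page2>; rel="next", <https://example.com/page9>; rel="last"'
# parses to [{'url': 'https://example.com/page2', 'rel': 'next'},
#            {'url': 'https://example.com/page9', 'rel': 'last'}].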
# Null bytes; no need to recreate these on each call to guess_json_utf
_null = '\x00'.encode('ascii')  # encoding to ASCII for Python 3
_null2 = _null * 2
_null3 = _null * 3


def guess_json_utf(data):
    # JSON always starts with two ASCII characters, so detection is as
    # easy as counting the nulls and from their location and count
    # determine the encoding. Also detect a BOM, if present.
    sample = data[:4]
    if sample in (codecs.BOM_UTF32_LE, codecs.BOM_UTF32_BE):
        # Note: codecs.BOM32_BE is a legacy alias for the UTF-16-BE BOM, so the
        # UTF-32 constant is used here for the big-endian check.
        return 'utf-32'     # BOM included
    if sample[:3] == codecs.BOM_UTF8:
        return 'utf-8-sig'  # BOM included, MS style (discouraged)
    if sample[:2] in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE):
        return 'utf-16'     # BOM included
    nullcount = sample.count(_null)
    if nullcount == 0:
        return 'utf-8'
    if nullcount == 2:
        if sample[::2] == _null2:   # 1st and 3rd are null
            return 'utf-16-be'
        if sample[1::2] == _null2:  # 2nd and 4th are null
            return 'utf-16-le'
        # Did not detect 2 valid UTF-16 ascii-range characters
    if nullcount == 3:
        if sample[:3] == _null3:
            return 'utf-32-be'
        if sample[1:] == _null3:
            return 'utf-32-le'
        # Did not detect a valid UTF-32 ascii-range character
    return None
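# Example (illustrative): for the BOM-less bytes b'{\x00"\x00' (the start of
# '{"' encoded as UTF-16-LE), nullcount is 2 and the nulls sit in the 2nd and
# 4th positions, so guess_json_utf() returns 'utf-16-le'.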
def prepend_scheme_if_needed(url, new_scheme):
    '''Given a URL that may or may not have a scheme, prepend the given scheme.
    Does not replace a present scheme with the one provided as an argument.'''
    scheme, netloc, path, params, query, fragment = urlparse(url, new_scheme)

    # urlparse is a finicky beast, and sometimes decides that there isn't a
    # netloc present. Assume that it's being over-cautious, and switch netloc
    # and path if urlparse decided there was no netloc.
    if not netloc:
        netloc, path = path, netloc

    return urlunparse((scheme, netloc, path, params, query, fragment))
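# Example (illustrative):
#   prepend_scheme_if_needed('example.com/path', 'http') -> 'http://example.com/path'
#   prepend_scheme_if_needed('https://example.com', 'http') -> 'https://example.com'
# A scheme that is already present is never replaced.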
def get_auth_from_url(url):
    """Given a url with authentication components, extract them into a tuple of
    username,password."""
    parsed = urlparse(url)

    try:
        auth = (unquote(parsed.username), unquote(parsed.password))
    except (AttributeError, TypeError):
        auth = ('', '')

    return auth
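# Example (illustrative):
#   get_auth_from_url('http://user:p%40ss@example.com/') -> ('user', 'p@ss')
# URLs without credentials yield the empty pair ('', '').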
def to_native_string(string, encoding='ascii'):
    """
    Given a string object, regardless of type, returns a representation of that
    string in the native string type, encoding and decoding where necessary.
    This assumes ASCII unless told otherwise.
    """
    out = None

    if isinstance(string, builtin_str):
        out = string
    else:
        if is_py2:
            out = string.encode(encoding)
        else:
            out = string.decode(encoding)

    return out
def urldefragauth(url):
    """
    Given a url remove the fragment and the authentication part
    """
    scheme, netloc, path, params, query, fragment = urlparse(url)

    # see func:`prepend_scheme_if_needed`
    if not netloc:
        netloc, path = path, netloc

    netloc = netloc.rsplit('@', 1)[-1]

    return urlunparse((scheme, netloc, path, params, query, ''))
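# Example (illustrative):
#   urldefragauth('http://user:pass@example.com/page#section') -> 'http://example.com/page'
# Both the userinfo before '@' and the '#fragment' are dropped.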