Commit
Merge pull request #13 from twitterdev/rename_package
Rename package for pypi
Aaron Gonzales committed Jan 8, 2018
2 parents 4a7bba4 + 402f583 commit fe7445c
Showing 14 changed files with 72 additions and 63 deletions.
26 changes: 13 additions & 13 deletions README.rst
@@ -3,7 +3,7 @@ Python Twitter Search API

This library serves as a python interface to the `Twitter premium and enterprise search APIs <https://developer.twitter.com/en/docs/tweets/search/overview/30-day-search>`_. It provides a command-line utility and a library usable from within python. It comes with tools for assisting in dynamic generation of search rules and for parsing tweets.

Pretty docs can be seen `here <https://twitterdev.github.io/twitter_search_api/>`_.
Pretty docs can be seen `here <https://twitterdev.github.io/search_tweets_api/>`_.


Features
@@ -22,26 +22,26 @@ Features
Installation
============

We will soon handle releases via PyPy, but you can also install the current master version via
We will host the package on PyPI so it's pip-friendly.

.. code:: bash
pip install git+https://github.com/twitterdev/twitter_search_api.git
pip install searchtweets
Or the development version locally via

.. code:: bash
git clone https://github.com/twitterdev/twitter_search_api.git
cd twitter_search_api
git clone https://github.com/twitterdev/search-tweets-python
cd search-tweets-python
pip install -e .
Using the Command Line Application
==================================

We provide a utility, ``twitter_search.py``, in the ``tools`` directory that provides rapid access to tweets.
We provide a utility, ``search_tweets.py``, in the ``tools`` directory that provides rapid access to tweets.
Premium customers should use ``--bearer-token``; enterprise customers should use ``--user-name`` and ``--password``.

The ``--endpoint`` flag will specify the full URL of your connection, e.g.:
@@ -61,7 +61,7 @@ Note that the ``--results-per-call`` flag specifies an argument to the API call

.. code:: bash
python twitter_search.py \
python search_tweets.py \
--bearer-token <BEARER_TOKEN> \
--endpoint <MY_ENDPOINT> \
--max-results 1000 \
@@ -74,7 +74,7 @@ Note that the ``--results-per-call`` flag specifies an argument to the API call

.. code:: bash
python twitter_search.py \
python search_tweets.py \
--user-name <USERNAME> \
--password <PW> \
--endpoint <MY_ENDPOINT> \
@@ -89,7 +89,7 @@ Note that the ``--results-per-call`` flag specifies an argument to the API call

.. code:: bash
python twitter_search.py \
python search_tweets.py \
--user-name <USERNAME> \
--password <PW> \
--endpoint <MY_ENDPOINT> \
@@ -134,7 +134,7 @@ When using a config file in conjunction with the command-line utility, you need

Example::

python twitter_search_api.py \
python search_tweets.py \
--config-file myapiconfig.config \
--no-print-stream

@@ -160,7 +160,7 @@ Your credentials should be put into a YAML file that looks like this:
.. code:: yaml
twitter_search_api:
search_tweets_api:
endpoint: <FULL_URL_OF_ENDPOINT>
account: <ACCOUNT_NAME>
username: <USERNAME>
@@ -181,7 +181,7 @@ throughout your program's session.

.. code:: python
from twittersearch import ResultStream, gen_rule_payload, load_credentials
from searchtweets import ResultStream, gen_rule_payload, load_credentials
Enterprise setup
----------------
@@ -257,7 +257,7 @@ Let's see how it goes:

.. code:: python
from twittersearch import collect_results
from searchtweets import collect_results
.. code:: python
4 changes: 2 additions & 2 deletions docs/Makefile
@@ -4,7 +4,7 @@
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
SPHINXPROJ = twittersearchapi
SPHINXPROJ = searchtweetsapi
SOURCEDIR = source
BUILDDIR = build

@@ -17,4 +17,4 @@ help:
# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
16 changes: 8 additions & 8 deletions docs/source/conf.py
@@ -55,7 +55,7 @@
master_doc = 'index'

# General information about the project.
project = 'twitter search api'
project = 'Twitter Search APIs Python Wrapper'
copyright = '2017, twitterdev'
author = 'twitterdev'

@@ -64,9 +64,9 @@
# built documents.
#
# The short X.Y version.
version = '0.1'
version = '1.0'
# The full version, including alpha/beta/rc tags.
release = '0.1'
release = '1.0b'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
@@ -112,7 +112,7 @@
# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'twittersearchdoc'
htmlhelp_basename = 'searchtweetsdoc'


# -- Options for LaTeX output ---------------------------------------------
@@ -139,7 +139,7 @@
# (source start file, target name, title,
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(master_doc, 'twittersearch.tex', 'twitter search api Documentation',
(master_doc, 'searchtweets.tex', 'twitter search api Documentation',
'twitterdev', 'manual'),
]

@@ -149,7 +149,7 @@
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
(master_doc, 'twittersearch', 'twitter search api Documentation',
(master_doc, 'searchtweets', 'twitter search api Documentation',
[author], 1)
]

@@ -160,8 +160,8 @@
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
(master_doc, 'twittersearch', 'twitter search api Documentation',
author, 'twittersearch', 'One line description of project.',
(master_doc, 'searchtweets', 'twitter search api Documentation',
author, 'searchtweets', 'One line description of project.',
'Miscellaneous'),
]

2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -11,7 +11,7 @@
:caption: Contents:

self
twittersearch
searchtweets



4 changes: 2 additions & 2 deletions docs/source/modules.rst
@@ -1,7 +1,7 @@
twittersearch
searchtweets
=============

.. toctree::
:maxdepth: 4

twittersearch
searchtweets
16 changes: 8 additions & 8 deletions docs/source/twittersearch.rst → docs/source/searchtweets.rst
@@ -1,29 +1,29 @@
twittersearch package
searchtweets package
=====================

Submodules
----------

twittersearch\.api\_utils module
searchtweets\.api\_utils module
--------------------------------

.. automodule:: twittersearch.api_utils
.. automodule:: searchtweets.api_utils
:members:
:undoc-members:
:show-inheritance:

twittersearch\.result\_stream module
searchtweets\.result\_stream module
------------------------------------

.. automodule:: twittersearch.result_stream
.. automodule:: searchtweets.result_stream
:members:
:undoc-members:
:show-inheritance:

twittersearch\.utils module
searchtweets\.utils module
---------------------------

.. automodule:: twittersearch.utils
.. automodule:: searchtweets.utils
:members:
:undoc-members:
:show-inheritance:
@@ -32,7 +32,7 @@ twittersearch\.utils module
Module contents
---------------

.. automodule:: twittersearch
.. automodule:: searchtweets
:members:
:undoc-members:
:show-inheritance:
4 changes: 2 additions & 2 deletions examples/api_example.ipynb
@@ -41,7 +41,7 @@
},
"outputs": [],
"source": [
"from twittersearch import ResultStream, gen_rule_payload, load_credentials"
"from searchtweets import ResultStream, gen_rule_payload, load_credentials"
]
},
{
@@ -149,7 +149,7 @@
},
"outputs": [],
"source": [
"from twittersearch import collect_results"
"from searchtweets import collect_results"
]
},
{
4 changes: 2 additions & 2 deletions examples/readme.rst
@@ -41,7 +41,7 @@ throughout your program's session.

.. code:: python
from twittersearch import ResultStream, gen_rule_payload, load_credentials
from searchtweets import ResultStream, gen_rule_payload, load_credentials
Enterprise setup
----------------
@@ -117,7 +117,7 @@ Let's see how it goes:

.. code:: python
from twittersearch import collect_results
from searchtweets import collect_results
.. code:: python
File renamed without changes.
23 changes: 12 additions & 11 deletions twittersearch/api_utils.py → searchtweets/api_utils.py
@@ -15,7 +15,7 @@
import json

__all__ = ["gen_rule_payload", "gen_params_from_config", "load_credentials",
"infer_endpoint",
"infer_endpoint", "convert_utc_time",
"validate_count_api", "GNIP_RESP_CODES", "change_to_count_endpoint"]

logger = logging.getLogger(__name__)
@@ -65,7 +65,7 @@ def convert_utc_time(datetime_str):
string of GNIP API formatted date.
Example:
>>> from twittersearch.utils import convert_utc_time
>>> from searchtweets.utils import convert_utc_time
>>> convert_utc_time("201708020000")
'201708020000'
>>> convert_utc_time("2017-08-02")
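The docstring above only shows the pass-through case before the hunk is cut off. A minimal stdlib sketch of the conversion it describes follows; the function name ``to_gnip_time`` and the exact output for ISO-style inputs are assumptions based on the API's compact ``YYYYMMDDhhmm`` timestamp convention, not the library's actual implementation.

```python
from datetime import datetime

def to_gnip_time(datetime_str):
    """Illustrative re-implementation of the conversion documented for
    convert_utc_time: compact 'YYYYMMDDhhmm' strings pass through
    unchanged; ISO-style dates are expanded to that form."""
    if "-" not in datetime_str:
        # already in the compact GNIP form
        return datetime_str
    if len(datetime_str) <= 10:
        dt = datetime.strptime(datetime_str, "%Y-%m-%d")
    else:
        dt = datetime.strptime(datetime_str, "%Y-%m-%d %H:%M")
    return dt.strftime("%Y%m%d%H%M")
```

For example, ``to_gnip_time("2017-08-02")`` would yield ``"201708020000"`` under these assumptions.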
@@ -136,7 +136,7 @@ def gen_rule_payload(pt_rule, results_per_call=500,
Example:
>>> from twittersearch.utils import gen_rule_payload
>>> from searchtweets.utils import gen_rule_payload
>>> gen_rule_payload("kanye west has:geo",
... from_date="2017-08-21",
... to_date="2017-08-22")
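To make the docstring example above concrete, here is a rough sketch of the JSON payload such a call builds. The field names (``query``, ``maxResults``, ``fromDate``, ``toDate``) follow the search API's request format, but this is an illustration, not the library's real helper, and it assumes dates are already in the compact ``YYYYMMDDhhmm`` form.

```python
import json

def gen_rule_payload(pt_rule, results_per_call=500,
                     from_date=None, to_date=None):
    # Sketch of the payload builder: the real helper also normalizes
    # date strings; here they are assumed to be pre-formatted.
    payload = {"query": pt_rule, "maxResults": results_per_call}
    if to_date:
        payload["toDate"] = to_date
    if from_date:
        payload["fromDate"] = from_date
    return json.dumps(payload)

rule = gen_rule_payload("kanye west has:geo",
                        from_date="201708210000",
                        to_date="201708220000")
```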
@@ -221,12 +221,12 @@ def validate_count_api(rule_payload, endpoint):

def load_credentials(filename=None, account_type=None):
"""
handlles credeintial managmenet via a YAML file. YAML files should look
Handles credential management via a YAML file. YAML files should look
like this:
.. code:: yaml
twitter_search_api:
search_tweets_api:
endpoint: <FULL_URL_OF_ENDPOINT>
account: <ACCOUNT_NAME>
username: <USERNAME>
@@ -240,21 +240,22 @@ def load_credentials(filename=None, account_type=None):
default '~/.twitter_keys.yaml'
account_type (str): pass your account type, "premium" or "enterprise"
Returns: dict of your access credentials.
Returns:
dict of your access credentials.
Example:
>>> from twittersearch.api_utils import load_credentials
>>> search_args = load_credentials(account_type="premium")
>>> search_args.keys()
dict_keys(['bearer_token', 'endpoint'])
>>> from searchtweets.api_utils import load_credentials
>>> search_args = load_credentials(account_type="premium")
>>> search_args.keys()
dict_keys(['bearer_token', 'endpoint'])
"""
if account_type is None or account_type not in {"premium", "enterprise"}:
logger.error("You must provide either 'premium' or 'enterprise' here")
raise KeyError
filename = "~/.twitter_keys.yaml" if filename is None else filename
with open(os.path.expanduser(filename)) as f:
search_creds = yaml.load(f)["twitter_search_api"]
search_creds = yaml.load(f)["search_tweets_api"]

try:

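The ``load_credentials`` hunk above is truncated before the key-selection step. A hedged sketch of that selection logic follows; ``extract_credentials`` is a hypothetical helper name, and the per-account-type key sets are inferred from the docstring's ``dict_keys(['bearer_token', 'endpoint'])`` example rather than taken verbatim from the library.

```python
def extract_credentials(creds, account_type):
    """Sketch of the filtering load_credentials applies to the parsed
    'search_tweets_api' block of ~/.twitter_keys.yaml."""
    if account_type not in {"premium", "enterprise"}:
        raise KeyError("account_type must be 'premium' or 'enterprise'")
    if account_type == "premium":
        keys = ("bearer_token", "endpoint")   # token-based auth
    else:
        keys = ("username", "password", "endpoint")  # basic auth
    return {k: creds[k] for k in keys}

# Stand-in for yaml.load(...)["search_tweets_api"]
parsed = {"endpoint": "https://example.com/search", "account": "acct",
          "username": "u", "password": "p", "bearer_token": "token"}
premium_args = extract_credentials(parsed, "premium")
```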
13 changes: 8 additions & 5 deletions twittersearch/result_stream.py → searchtweets/result_stream.py
@@ -16,8 +16,8 @@
import json
from tweet_parser.tweet import Tweet

from .utils import *
from .api_utils import *
from .utils import merge_dicts

from .api_utils import (infer_endpoint, GNIP_RESP_CODES,
change_to_count_endpoint)


logger = logging.getLogger(__name__)

@@ -30,7 +30,7 @@ def make_session(username=None, password=None, bearer_token=None):
Args:
username (str): username for the session
password (str): password for the user
bearer_token (str): token for the session for freemium.
bearer_token (str): token for a premium API user.
"""

if password is None and bearer_token is None:
@@ -226,7 +229,7 @@ def init_session(self):

def check_counts(self):
"""
Disables tweet parsing if the count api is used.
Disables tweet parsing if the count API is used.
"""
if "counts" in re.split("[/.]", self.endpoint):
logger.info("disabling tweet parsing due to counts api usage")
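The endpoint check shown in ``check_counts`` can be isolated as a small predicate; this sketch mirrors the ``re.split`` test from the hunk above (the counts API returns aggregate volumes rather than tweets, so tweet parsing must be disabled).

```python
import re

def is_counts_endpoint(endpoint):
    # Same test as ResultStream.check_counts: split the URL on
    # '/' and '.' and look for a 'counts' path segment.
    return "counts" in re.split(r"[/.]", endpoint)
```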
@@ -278,7 +281,7 @@ def collect_results(rule, max_results=500, result_stream_args=None):
list of results
Example:
>>> from twittersearch import collect_results
>>> from searchtweets import collect_results
>>> tweets = collect_results(rule,
max_results=500,
result_stream_args=search_args)