OS error #3

@DrEdgarRR

I am getting this error. I don't know why:

OSError Traceback (most recent call last)
Input In [13], in <cell line: 12>()
10 import re
11 import spacy
---> 12 nlp = spacy.load('en_core_web_lg')

File ~\anaconda3\lib\site-packages\spacy\__init__.py:54, in load(name, vocab, disable, enable, exclude, config)
30 def load(
31 name: Union[str, Path],
32 *,
(...)
37 config: Union[Dict[str, Any], Config] = util.SimpleFrozenDict(),
38 ) -> Language:
39 """Load a spaCy model from an installed package or a local path.
40
41 name (str): Package name or model path.
(...)
52 RETURNS (Language): The loaded nlp object.
53 """
---> 54 return util.load_model(
55 name,
56 vocab=vocab,
57 disable=disable,
58 enable=enable,
59 exclude=exclude,
60 config=config,
61 )

File ~\anaconda3\lib\site-packages\spacy\util.py:439, in load_model(name, vocab, disable, enable, exclude, config)
437 if name in OLD_MODEL_SHORTCUTS:
438 raise IOError(Errors.E941.format(name=name, full=OLD_MODEL_SHORTCUTS[name])) # type: ignore[index]
--> 439 raise IOError(Errors.E050.format(name=name))

OSError: [E050] Can't find model 'en_core_web_lg'. It doesn't seem to be a Python package or a valid path to a data directory.
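For reference, [E050] usually just means the model was never installed in the environment the notebook kernel is using: spaCy models such as en_core_web_lg are ordinary pip packages, typically installed with `python -m spacy download en_core_web_lg`. A minimal sketch to check whether the package is visible to the current interpreter (the helper name here is mine, not part of spaCy):

```python
import importlib.util

def model_is_installed(name: str) -> bool:
    # spaCy models like en_core_web_lg install as normal Python packages,
    # so spacy.load() can only find them if they are importable here.
    return importlib.util.find_spec(name) is not None

if not model_is_installed("en_core_web_lg"):
    # Run this in the SAME environment the notebook kernel uses:
    #   python -m spacy download en_core_web_lg
    print("en_core_web_lg is not installed in this interpreter")
```

If the check fails even after downloading, the download likely went into a different Python environment than the one running the notebook.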

Possibly as a consequence, I'm getting another error:

AttributeError Traceback (most recent call last)
Input In [14], in <cell line: 1>()
----> 1 from twitterscraper import query_tweets
2 from twitterscraper.query import query_tweets_from_user
3 import datetime as dt

File ~\anaconda3\lib\site-packages\twitterscraper\__init__.py:13, in <module>
9 author = 'Ahmet Taspinar'
10 license = 'MIT'
---> 13 from twitterscraper.query import query_tweets
14 from twitterscraper.query import query_tweets_from_user
15 from twitterscraper.query import query_user_info

File ~\anaconda3\lib\site-packages\twitterscraper\query.py:76, in <module>
73 for i in range(n):
74 yield start + h * i
---> 76 proxies = get_proxies()
77 proxy_pool = cycle(proxies)
79 def query_single_page(query, lang, pos, retry=50, from_user=False, timeout=60, use_proxy=True):

File ~\anaconda3\lib\site-packages\twitterscraper\query.py:49, in get_proxies()
47 soup = BeautifulSoup(response.text, 'lxml')
48 table = soup.find('table',id='proxylisttable')
---> 49 list_tr = table.find_all('tr')
50 list_td = [elem.find_all('td') for elem in list_tr]
51 list_td = list(filter(None, list_td))

AttributeError: 'NoneType' object has no attribute 'find_all'
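This second traceback is unrelated to spaCy: at import time, twitterscraper's get_proxies() scrapes free-proxy-list.net, and soup.find('table', id='proxylisttable') returns None because the site no longer serves a table with that id, so the subsequent .find_all('tr') call blows up. A guard like the following (a hypothetical helper, not part of twitterscraper, which calls table.find_all('tr') unguarded) shows the pattern that would avoid the crash:

```python
def safe_rows(table):
    """Return the <tr> elements of a parsed table, or an empty list
    when the table is missing (hypothetical helper sketching a fix)."""
    if table is None:
        # soup.find('table', id='proxylisttable') returned None:
        # the proxy site's markup has changed since the library was written.
        return []
    return table.find_all('tr')
```

Patching query.py to degrade to an empty proxy pool would let the import succeed, though the scraper may still fail later if the library has not been updated to track the site's changes.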
