The while loop in info is.. interesting. It uses a class attribute as a loop counter, which actually causes a bug here: the counter persists between calls, so on subsequent calls you will skip the loop, never append to the outer list, and return it empty. If page is a local variable instead, it will work fine no matter how many calls you make.
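To make the failure mode concrete, here is a minimal sketch (a hypothetical Scraper class, not your actual code) of an attribute counter surviving between calls:

```python
class Scraper:
    """Minimal sketch of the bug: the loop counter is an attribute."""

    def __init__(self):
        self.page = 1  # persists for the lifetime of the instance

    def info(self):
        results = []
        while self.page < 3:     # counter starts where the last call left it
            results.append(self.page)
            self.page += 1
        return results

scraper = Scraper()
print(scraper.info())  # [1, 2]
print(scraper.info())  # [] - self.page is still 3, so the loop never runs
```

A local page = 1 inside info would reset on every call, which is almost certainly the behaviour you want.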
 Instead of keeping a loop counter like this and incrementing it manually, the preferred Python style is to use a for page in range(...): loop. The equivalent for loop to your current while loop would be:
for page in range(1, 2):
   ...
 But in light of your clarification in the comments that this would normally be a while True: loop, which only breaks once it hits a page with no interesting links, the equivalent is to use itertools from the stdlib:
import itertools as it

for page in it.count(1):
    ...
 You have a variable links to capture the links on each iteration of the loop, and linkz to accumulate them across all the pages you parse (which is only one at the moment anyway, but I'm assuming that could change later). Those names are quite confusing; it would be good to differentiate better - call them links for the one that gathers all the links, and page_links for the one that gets the links for just this page.
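Putting the last few points together - local variables, it.count, and the links/page_links names - the loop could look something like this sketch. The itemprop="name" selector and the {} URL placeholder are my reading of your code; the fetch parameter is an addition of mine so the function can be exercised without hitting the network:

```python
import itertools as it

import requests
from bs4 import BeautifulSoup

def gather_links(url, fetch=None):
    """Collect links across pages until a page has none left.

    `url` should contain a `{}` placeholder for the page number.
    `fetch` returns the HTML for a URL; it defaults to requests.
    """
    if fetch is None:
        fetch = lambda u: requests.get(u).content
    links = []                                  # all links, across every page
    for page in it.count(1):                    # 1, 2, 3, ... until we break
        soup = BeautifulSoup(fetch(url.format(page)), "html.parser")
        page_links = [a["href"] for a in soup.find_all("a", {"itemprop": "name"})]
        if not page_links:                      # no interesting links: we're done
            break
        links.extend(page_links)
    return links
```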
 Your email function parses what I would call your 'main' data structure. Since that holds a lot more than just email-related information, email isn't the best name for it.
 The data structure that email creates seems awkward for your data. If you think of people as 'records', and the information you capture about them as 'fields', then it would make more sense to have it as a list of namedtuples, or even a pandas dataframe if you're working with a lot of data. That way, your code to parse it is a little bit simpler:
person = namedtuple("person", ["name", "phone", "email"])  # whichever fields you capture
people = []
for link in links:
    name = ...    # parse name
    phone = ...   # parse phone number
    email = ...   # parse email, etc.
    people.append(person(name, phone, email))
return people
 I've called it links rather than self.links because of something I'm about to get to.
 But this is the major data structure in your class: it contains all the information that this kind of class is designed to parse. As it stands, though, you bundle all of the phone numbers into one flat list and all of the email addresses into another, so nothing captures which phone number and which email address belong to the same person. Instead, capture a collection of people - the list of namedtuples or the dataframe described above - and have __init__ do all of this parsing, storing just that structure as an attribute.
 The only other attribute you would need to store is the filename that gets passed in to __init__. But that is only related to where you write the data to disk. So, it would make more sense as an argument to your csv method - the only method that uses it - because the filename to write to is a decision that goes with "I want to write this data to disk" more than "I want to scrape some data from a website".
 So, you're left with a class with one attribute, and two methods - one of which is __init__, and the other is fairly trivial. In fact, csv becomes even more trivial with a list of namedtuples or a DataFrame: the list of namedtuples can be passed to writerows as-is (no need to zip it), and a pandas DataFrame has its own CSV routines that you can delegate to.
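For example (with hypothetical Person fields - yours will differ), writing namedtuples with the csv module, or delegating to pandas:

```python
import csv
from collections import namedtuple

Person = namedtuple("Person", ["name", "phone", "email"])  # hypothetical fields

people = [
    Person("Alice", "555-0100", "alice@example.com"),
    Person("Bob", "555-0199", "bob@example.com"),
]

with open("people.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(Person._fields)   # header row straight from the namedtuple
    writer.writerows(people)          # namedtuples are tuples - no zip required

# Or, with pandas, delegate the whole job:
# import pandas as pd
# pd.DataFrame(people, columns=Person._fields).to_csv("people.csv", index=False)
```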
 That means there is no need for a class. Instead, use what your __init__ has grown into as a standalone function - call it parse_yellowpages(url) - and have it return the list or DataFrame. Then drop the csv function entirely and just put that in the main line of your program.
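The resulting shape might be something like this sketch (the fields are hypothetical, the scraping body is elided, and the fetch_records argument is a stand-in for the real requests/BeautifulSoup work):

```python
from collections import namedtuple

Person = namedtuple("Person", ["name", "phone", "email"])  # hypothetical fields

def parse_yellowpages(url, fetch_records=None):
    """Everything __init__ had grown into, as one plain function.

    `fetch_records` stands in for the scraping logic so the overall
    shape is visible; the real body would be the loop-and-parse code.
    """
    raw = fetch_records(url) if fetch_records else []
    return [Person(*record) for record in raw]

# The 'main line' of the program then replaces both the class and its csv method:
# people = parse_yellowpages(SEARCH_URL)
# pandas.DataFrame(people).to_csv("results.csv", index=False)
```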
Your current code is a little inconsistent about whether it allows multi-valued fields: e.g., can a person have more than one phone number?
 The parsing code looks like it does: you parse out a list of data, then extend it onto the accumulated list. But then when you write out your CSV, you assume (by your use of zip to regroup them) that every field has exactly one value.
In the comments you clarified that you only want single-valued fields. That being the case, don't include a full list of every match you found in your data structure. If you get three phone numbers for a person when you're only expecting one, you want to either consider it an error and bail out, or forget all but one of them. Your current code would silently remember all of them, forget that it did that, and end up with corrupt data.
To take the 'signal an error' path, do something like this:
phone_numbers = soup2.find_all(...)
if len(phone_numbers) != 1:
    raise RuntimeError('Empty or multivalued field: ' + url)
else:
    this_person.phone = phone_numbers[0]
 If you prefer to instead ignore any surplus values, use find instead of find_all - they take the same arguments, but find will just return one result instead of a list of them.