CEH v11 Module 2 Reconnaissance & Footprinting Lab Notes
Author: compiled from Internet sources
1. Scenario
Reconnaissance refers to collecting information about a target, which is the first step in any attack on a system. It has its roots in military operations, where the term refers to the mission of collecting information about an enemy. Reconnaissance helps attackers narrow down the scope of their efforts and aids in the selection of weapons of attack. Attackers use the gathered information to create a blueprint, or “footprint,” of the organization, which helps them select the most effective strategy to compromise the system and network security.
Similarly, the security assessment of a system or network starts with the reconnaissance and footprinting of the target. Ethical hackers and penetration (pen) testers must collect enough information about the target of the evaluation before initiating assessments. Ethical hackers and pen testers should simulate all the steps that an attacker usually follows to obtain a fair idea of the security posture of the target organization. In this scenario, you work as an ethical hacker with a large organization. Your organization is alarmed at the news stories concerning new attack vectors plaguing large organizations around the world. Furthermore, your organization was the target of a major security breach in the past, where the personal data of several of its customers were exposed on social networking sites.
You have been asked by senior managers to perform a proactive security assessment of the company. Before you can start any assessment, you should discuss and define the scope with management; the scope of the assessment identifies the systems, network, policies and procedures, human resources, and any other component of the system that requires security evaluation. You should also agree with management on rules of engagement (RoE)—the “do’s and don’ts” of assessment. Once you have the necessary approvals to perform ethical hacking, you should start gathering information about the target organization. Once you methodologically begin the footprinting process, you will obtain a blueprint of the security profile of the target organization. The term “blueprint” refers to the unique system profile of the target organization as the result of footprinting.
The labs in this module will give you real-time experience in collecting a variety of information about the target organization from various open or publicly accessible sources.
2. Objectives
The objective of the lab is to extract information about the target organization that includes, but is not limited to:
- Organization information: employee details, partner details, weblinks, web technologies, patents, trademarks, etc.
- Network information: domains, sub-domains, network blocks, network topologies, trusted routers, firewalls, IP addresses of the reachable systems, the Whois record, DNS records, and other related information
- System information: operating systems, web server OSes, user accounts and passwords, etc.
3. Overview of Footprinting
Footprinting refers to the process of collecting information about a target network and its environment, which helps in evaluating the security posture of the target organization’s IT infrastructure. It also helps to identify the level of risk associated with the organization’s publicly accessible information.
Footprinting can be categorized into passive footprinting and active footprinting:
- Passive Footprinting: Involves gathering information without direct interaction. This type of footprinting is principally useful when there is a requirement that the information-gathering activities are not to be detected by the target.
- Active Footprinting: Involves gathering information with direct interaction. In active footprinting, the target may recognize the ongoing information-gathering process, as we overtly interact with the target network.
4. Lab Tasks
Ethical hackers or pen testers use numerous tools and techniques to collect information about the target. Recommended labs that will assist you in learning various footprinting techniques include:
4.1. Perform footprinting through search engines
Search engines use crawlers, automated software that continuously scans active websites, and add the retrieved results to the search engine index, which is further stored in a huge database. When a user queries a search engine index, it returns a list of Search Engine Results Pages (SERPs). These results include web pages, videos, images, and many different file types ranked and displayed based on their relevance. Examples of major search engines include Google, Bing, Yahoo, Ask, Aol, Baidu, WolframAlpha, and DuckDuckGo.
a. Gather information using advanced Google hacking techniques
Advanced Google hacking refers to the art of creating complex search engine queries by employing advanced Google operators to extract sensitive or hidden information about a target company from the Google search results. This can provide information about websites that are vulnerable to exploitation. Note: Here, we will consider EC-Council as a target organization.
Launch the browser and type the following Google query in the search bar:
intitle:password site:www.eccouncil.org
The page containing the results appears, as shown below:
Then we can search for PDF files related to EC-Council by typing the query below:
EC-Council filetype:pdf
Again, the result page appears:
Apart from the aforementioned advanced Google operators, you can also use the following to perform an advanced search to gather more information about the target organization from publicly available sources.
- allinurl: This operator restricts results to only the pages containing all the query terms specified in the URL. For example, the query ‘allinurl: google career’ returns only pages containing the words “google” and “career” in the URL.
- inurl: This operator restricts the results to only the pages containing the specified word in the URL. For example, the query ‘inurl: copy site:www.google.com’ returns only Google pages in which the URL has the word “copy.”
- allintitle: This operator restricts results to only the pages containing all the query terms specified in the title. For example, the query ‘allintitle: detect malware’ returns only pages containing the words “detect” and “malware” in the title.
- intitle: This operator restricts results to only the pages containing the specified term in the title. For example, the query ‘malware detection intitle:help’ returns only pages that have the term “help” in the title, and the terms “malware” and “detection” anywhere within the page.
- inanchor: This operator restricts results to only the pages containing the query terms specified in the anchor text on links to the page. For example, the query ‘Anti-virus inanchor:Norton’ returns only pages containing the word “Anti-virus” with inbound links whose anchor text contains the word “Norton.”
- allinanchor: This operator restricts results to only the pages containing all query terms specified in the anchor text on links to the pages. For example, the query ‘allinanchor: best cloud service provider’ returns only pages for which the anchor text on links to the pages contains the words “best,” “cloud,” “service,” and “provider.”
- cache: This operator displays Google’s cached version of a web page instead of the current version of the web page. For example, the query ‘cache:www.eff.org’ will show Google’s cached version of the Electronic Frontier Foundation home page.
- link: This operator searches for websites or pages that contain links to the specified website or page. For example, the query ‘link:www.googleguide.com’ finds pages that point to Google Guide’s home page. Note: According to Google’s documentation, “you cannot combine a link: search with a regular keyword search.” Also note that when you combine link: with another advanced operator, Google may not return all the pages that match.
- related: This operator displays websites that are similar or related to the URL specified. For example, the query ‘related:www.microsoft.com’ provides the Google search engine results page with websites similar to microsoft.com.
- info: This operator finds information for the specified web page. For example, the query ‘info:gothotel.com’ provides information about the national hotel directory GotHotel.com home page.
- location: This operator finds information for a specific location. For example, the query ‘location: 4 seasons restaurant’ will give you results based on the term “4 seasons restaurant.”
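Because these operators are plain text, building dork queries is easy to script. Below is a minimal sketch; the helper name `build_dork` and its interface are my own, not part of the lab or any Google API:

```python
def build_dork(keywords, **operators):
    """Compose a Google dork string from free-text keywords plus
    advanced operators such as site=, intitle=, filetype=."""
    parts = list(keywords)
    # Each keyword argument renders as name:value, e.g. site:www.eccouncil.org
    parts += [f"{name}:{value}" for name, value in operators.items()]
    return " ".join(parts)

# Reproduce the two queries used in this task.
q1 = build_dork([], intitle="password", site="www.eccouncil.org")
q2 = build_dork(["EC-Council"], filetype="pdf")
print(q1)  # intitle:password site:www.eccouncil.org
print(q2)  # EC-Council filetype:pdf
```

The composed strings can be pasted into the browser search bar exactly as in the manual steps above.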
This concludes the demonstration of gathering information using advanced Google hacking techniques. You can conduct a series of queries on your own by using these advanced Google operators and gather the relevant information about the target organization.
Then close all open windows and document all the acquired information.
b. Gather information from video search engines
Video search engines are Internet-based search engines that crawl the web looking for video content. These search engines either provide the functionality of uploading and hosting the video content on their own web servers, or they can parse video content that is hosted externally. Here, we will perform an advanced video search and reverse image search using the YouTube search engine and the YouTube DataViewer video analysis tool.
Launch the browser and enter ‘www.youtube.com’ in the address bar. The YouTube page appears.
In the search field, search for your target organization (here, ec-council). You will see all the latest videos uploaded by the target organization.
Select any video of your choice, right-click on the video title, and click Copy Link Location
After the video link is copied, open a new tab in Mozilla Firefox, type ‘https://citizenevidence.amnestyusa.org/’ in the address bar, and press Enter.
The Extract Meta Data page appears. In the Enter YouTube URL search field, paste the copied YouTube video location and click Go. In the search result, you can observe details related to the video such as Video ID, Upload Date, Upload Time, etc. You can also find Thumbnails to perform a reverse image search.
Click Reverse Image Search below any thumbnail; a new tab in Google appears, and the results for the reverse image search are displayed.
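The Video ID that YouTube DataViewer reports is simply the `v` query parameter of the watch URL, so you can also extract it locally. A standard-library-only sketch (the helper name is my own):

```python
from urllib.parse import urlparse, parse_qs

def youtube_video_id(url):
    """Return the video ID from a standard YouTube watch URL,
    or None if the URL carries no v parameter."""
    query = parse_qs(urlparse(url).query)
    ids = query.get("v")
    return ids[0] if ids else None

print(youtube_video_id("https://www.youtube.com/watch?v=dQw4w9WgXcQ"))  # dQw4w9WgXcQ
```

This is handy when batch-processing many copied video links before feeding them to an analysis tool.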
This concludes the demonstration of gathering information from the advanced video search and reverse image search using the YouTube search engine and the YouTube DataViewer video analysis tool. You can use other video search engines such as Google Videos (https://video.google.com), Yahoo Videos (https://video.search.yahoo.com), etc.; video analysis tools such as EZGif (https://ezgif.com), VideoReverser.com, etc.; and reverse image search tools such as TinEye Reverse Image Search (https://tineye.com), Yahoo Image Search (https://images.search.yahoo.com), etc. to gather crucial information about the target organization.
Then close all open windows and document all the acquired information.
c. Gather information from FTP search engines
File Transfer Protocol (FTP) search engines are used to search for files located on the FTP servers; these files may hold valuable information about the target organization. Many industries, institutions, companies, and universities use FTP servers to keep large file archives and other software that are shared among their employees. FTP search engines provide information about critical files and directories, including valuable information such as business strategies, tax documents, employee’s personal records, financial records, licensed software, and other confidential information.
Here, we will use the NAPALM FTP indexer FTP search engine to extract critical FTP information about the target organization.
Launch the browser and enter the link ‘https://www.searchftps.net/’ and press enter. NAPALM FTP indexer website appears, as shown in the screenshot.
In the search bar, type microsoft and click Search. You will get the search results with the details of FTP servers related to the target organization, as shown in the screenshot.
This concludes the demonstration of gathering information from the FTP search engine. You can also use FTP search engines such as Global FTP Search Engine (https://globalfilesearch.com), FreewareWeb FTP File Search (http://www.freewareweb.com), etc. to gather crucial FTP information about the target organization.
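Once a search engine points you at an open FTP server, triaging its file listing can be scripted. A sketch that flags filenames with potentially sensitive extensions; the extension list and sample filenames are illustrative assumptions, not from the lab:

```python
# Illustrative set of extensions that often mark documents, backups, or configs.
SENSITIVE_EXTENSIONS = {".xls", ".xlsx", ".doc", ".docx", ".pdf", ".bak", ".sql", ".cfg"}

def flag_sensitive(listing):
    """Return the subset of filenames whose extension suggests
    business documents, backups, or configuration files."""
    flagged = []
    for name in listing:
        dot = name.rfind(".")
        if dot != -1 and name[dot:].lower() in SENSITIVE_EXTENSIONS:
            flagged.append(name)
    return flagged

sample = ["readme.txt", "payroll_2021.xlsx", "site_backup.BAK", "logo.png"]
print(flag_sensitive(sample))  # ['payroll_2021.xlsx', 'site_backup.BAK']
```

In a real engagement the listing would come from the FTP server itself (e.g. via `ftplib`), with the owner's authorization.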
Close all open windows and document all the acquired information.
d. Gather information from IoT search engines
IoT search engines crawl the Internet for IoT devices that are publicly accessible. These search engines provide crucial information, including control of SCADA (Supervisory Control and Data Acquisition) systems, traffic control systems, Internet-connected household appliances, industrial appliances, CCTV cameras, etc.
Here, we will search for information about any vulnerable IoT device in the target organization using the Shodan IoT search engine.
Launch the browser and enter the link ‘https://www.shodan.io/’ and press enter. Shodan page appears, as shown in the screenshot.
In the search bar, type sina (新浪) and press Enter. You will obtain the search results with the details of all the vulnerable IoT devices related to sina in various countries, as shown in the screenshot.
This concludes the demonstration of gathering vulnerable IoT information using the Shodan search engine. You can also use Censys (https://censys.io), Thingful (https://www.thingful.net), etc., which are IoT search engines, to gather information such as manufacturer details, geographical location, IP address, hostname, open ports, etc.
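If you have a Shodan API key, the same search can be automated with the official `shodan` Python library; the triage step is sketched below on invented sample data shaped like Shodan's documented search response (a `matches` list with `ip_str`, `port`, and optional `org` fields):

```python
def summarize_shodan_matches(response):
    """Condense a Shodan-style search response into (ip, port, org) tuples."""
    return [
        (m.get("ip_str"), m.get("port"), m.get("org", "n/a"))
        for m in response.get("matches", [])
    ]

# Invented sample shaped like Shodan's search JSON; not real scan data.
sample = {
    "total": 2,
    "matches": [
        {"ip_str": "203.0.113.10", "port": 80, "org": "Example ISP"},
        {"ip_str": "198.51.100.7", "port": 554},
    ],
}
for ip, port, org in summarize_shodan_matches(sample):
    print(ip, port, org)
```

The sample IPs are from documentation-reserved ranges; in practice the response would come from an authenticated API query.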
Close all open windows and document all the acquired information.
4.2. Perform Footprinting Through Web Services
Web services such as social networking sites, people search services, alerting services, financial services, and job sites, provide information about a target organization; for example, infrastructure details, physical location, employee details, etc. Moreover, groups, forums, and blogs may provide sensitive information about a target organization such as public network information, system information, and personal information. Internet archives may provide sensitive information that has been removed from the World Wide Web (WWW).
a. Find the Company’s Domains and Sub-domains using Netcraft
Domains and sub-domains are part of critical network infrastructure for any organization. A company’s top-level domains (TLDs) and sub-domains can provide much useful information such as organizational history, services and products, and contact information. A public website is designed to show the presence of an organization on the Internet, and is available for free access.
Here, we will extract the company’s domains and sub-domains using the Netcraft web service.
Launch the browser and enter the link ‘https://www.netcraft.com’ and press enter. Netcraft page appears, as shown in the screenshot.
Click the Resources tab from the menu bar and click on the Site Report link under the Tools section. The What’s that site running? page appears. To extract information associated with the organizational website such as infrastructure, technology used, sub domains, background, network, etc., type the target website’s URL (here, https://www.eccouncil.org) in the text field, and then click the Lookup button, as shown in the screenshot.
The Site report for https://www.eccouncil.org page appears, containing information related to Background, Network, Hosting History, etc., as shown in the screenshot.
In the Network section, click on the website link (here, eccouncil.org) in the Domain field to view the subdomains. The result will display subdomains of the target website along with netblock and operating system information, as shown in the screenshot.
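Lists of candidate hostnames gathered from services like this can be post-processed locally; a minimal sketch (the hostnames are invented samples) that keeps only strict subdomains of a given parent domain:

```python
def subdomains_of(domain, hostnames):
    """Return the hostnames that are strict subdomains of `domain`,
    sorted alphabetically."""
    suffix = "." + domain.lower()
    return sorted(h for h in hostnames if h.lower().endswith(suffix))

hosts = ["www.eccouncil.org", "blog.eccouncil.org", "eccouncil.org", "example.com"]
print(subdomains_of("eccouncil.org", hosts))
# ['blog.eccouncil.org', 'www.eccouncil.org']
```

Note that the bare parent domain itself is excluded, since it is not a subdomain.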
This concludes the demonstration of finding the company’s domains and sub-domains using the Netcraft tool. You can also use tools such as Sublist3r (https://github.com), Pentest-Tools Find Subdomains (https://pentest-tools.com), etc. to identify the domains and sub-domains of any target website.
Close all open windows and document all the acquired information.
b. Gather Personal Information using PeekYou Online People Search Service
Online people search services, also called public record websites, are used by many individuals to find personal information about others; these services provide names, addresses, contact details, date of birth, photographs, videos, profession, details about family and friends, social networking profiles, property information, and optional background on criminal checks.
Here, we will gather information about a person from the target organization by performing people search using the PeekYou online people search service.
Launch the browser and enter the link ‘https://www.peekyou.com’ and press enter. PeekYou page appears, as shown in the screenshot.
In the First Name and Last Name fields, type Satya and Nadella, respectively. In the Location drop-down box, select Washington, DC. Then, click the Search icon. The people search begins, and the best matches for the provided search parameters will be displayed.
You can click on an appropriate result to view detailed information about the target person. Scroll down to view the entire information about the target person.
This concludes the demonstration of gathering personal information using the PeekYou online people search service. You can also use pipl (https://pipl.com), Intelius (https://www.intelius.com), BeenVerified (https://www.beenverified.com), etc., which are people search services to gather personal information of key employees in the target organization.
Close all open windows and document all the acquired information.
c. Gather an Email List using theHarvester
Email is a crucial channel for information exchange, and most people treat an email ID as a personal identifier for an employee or an organization. Thus, gathering the email IDs of critical personnel is one of the key tasks of ethical hackers.
Here, we will gather the list of email IDs related to a target organization using theHarvester tool.
theHarvester: This tool gathers emails, subdomains, hosts, employee names, open ports, and banners from different public sources such as search engines, PGP key servers, and the Shodan database, using Google, Bing, Shodan, etc. to extract valuable information from the target domain. The tool is intended to help ethical hackers and pen testers in the early stages of a security assessment understand the organization’s footprint on the Internet. It is also useful for anyone who wants to know what organizational information is visible to an attacker.
In the Terminal window of Kali, enter the command
theHarvester -d microsoft.com -l 200 -b baidu
Where -d specifies the domain or company name to search, -l specifies the number of results to be retrieved, and -b specifies the data source.
theHarvester starts extracting the details and displays them on the screen. You can see the email IDs related to the target company and target company hosts obtained from the Baidu source, as shown in the screenshot.
Here, we specify Baidu search engine as a data source. You can specify different data sources (e.g., Baidu, bing, bingapi, dogpile, Google, GoogleCSE, Googleplus, Google-profiles, linkedin, pgp, twitter, vhost, virustotal, threatcrowd, crtsh, netcraft, yahoo, all) to gather information about the target.
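The core trick behind email harvesting, pulling addresses out of fetched result pages, can be sketched with a regular expression. The pattern below is a simplified assumption for illustration, not theHarvester's actual implementation:

```python
import re

# Simplified address pattern; real-world email grammar is far looser.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(text, domain):
    """Return the unique addresses in `text` belonging to `domain`, sorted."""
    found = set(EMAIL_RE.findall(text))
    return sorted(e for e in found if e.lower().endswith("@" + domain))

page = "Contact sales@microsoft.com or hr@microsoft.com; spam@example.net."
print(harvest_emails(page, "microsoft.com"))
# ['hr@microsoft.com', 'sales@microsoft.com']
```

The domain filter mirrors theHarvester's `-d` option: only addresses at the target domain are kept.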
This concludes the demonstration of gathering an email list using theHarvester.
Close all open windows and document all the acquired information.
d. Gather Information using Deep and Dark Web Searching
The deep web consists of web pages and content that are hidden and unindexed and cannot be located using traditional web browsers and search engines. It can be accessed using the Tor Browser and directories such as The WWW Virtual Library. The dark web, or darknet, is a subset of the deep web where anyone can navigate anonymously without being traced. Deep and dark web searches can surface critical information such as credit card details, passport information, identification card details, medical records, social media accounts, Social Security Numbers (SSNs), etc.
Here, we will understand the difference between surface web search and dark web search using Mozilla Firefox and Tor Browser.
Launch Tor browser. The Connect to Tor window appears. Click the Connect button to directly browse through Tor Browser’s default settings. If Tor is censored in your country or if you want to connect through Proxy, click the Configure button and continue.
After a few seconds, the Tor Browser home page appears. The main advantage of Tor Browser is that it maintains the anonymity of the user throughout the session.
To understand surface web searching, first minimize Tor Browser and open Mozilla Firefox. Navigate to www.bing.com; in the Bing search bar, search for information related to hacker for hire. You will be presented with mostly irrelevant results, as shown in the screenshot.
Now switch to Tor Browser and search for the same term (i.e., hacker for hire). You will find relevant links related to professional hackers who operate underground through the dark web. Tor Browser uses the DuckDuckGo search engine to perform the dark web search. The results may vary in your environment.
Now, click on the toggle button that specifies the country of the VPN/Proxy (here, by default, Germany is selected) and select a relevant country (here, Australia). The search results for hacker for hire will be reloaded, as shown in the screenshot.
Click to open any of the search results (here, https://ihirehacker.com). The webpage opens up, as shown in the screenshot; you can see that the site belongs to professional hackers who operate underground. Such search results help you identify professional hackers; likewise, as an ethical hacker, you can gather critical and sensitive information about your target organization using deep and dark web searches.
You can also anonymously explore the following onion sites using Tor Brower to gather other relevant information about the target organization:
- The Hidden Wiki is an onion site that works as a Wikipedia service of hidden websites
- FakeID is an onion site for creating fake passports
- The Paypal Cent is an onion site that sells PayPal accounts with good balances
You can also use other tools to perform deep and dark web browsing.
This concludes the demonstration of gathering information using deep and dark web searching using Tor Browser. Close all open windows and document all the acquired information.
e. Determine Target OS Through Passive Footprinting
Operating system information is crucial for every ethical hacker. Ethical hackers can acquire details of the operating system running on the target machine by performing various passive footprinting techniques.
Here, we will gather target OS information through passive footprinting using the Censys web service.
Launch the browser and enter the link ‘https://censys.io/domain?q=’ and press enter. Censys page appears, as shown in the screenshot. In the Websites search bar, type the target website (here, eccouncil.org) and press Enter. From the results, click any result (here, eccouncil.org) from which you want to gather the OS details. The eccouncil.org page appears, as shown in the screenshot. Under the Basic Information section, you can observe that the OS is Windows. Apart from this, you can also observe that the Server on which the HTTP is running is cloudflare.
This concludes the demonstration of gathering OS information through passive footprinting using the Censys web service. You can also use webservices such as Netcraft (https://www.netcraft.com), Shodan (https://www.shodan.io), etc. to gather OS information of target organization through passive footprinting.
Close all open windows and document all the acquired information.
4.3. Perform Footprinting Through Social Networking Sites
Social networking sites are online services, platforms, or other sites that allow people to connect and build interpersonal relations. People usually maintain profiles on social networking sites to provide basic information about themselves and to help make and maintain connections with others; the profile generally contains information such as name, contact information (cellphone number, email address), friends’ information, information about family members, their interests, activities, etc. On social networking sites, people may also post their personal information such as date of birth, educational information, employment background, spouse’s names, etc. Organizations often post information such as potential partners, websites, and upcoming news about the company. Thus, social networking sites often prove to be valuable information resources. Examples of such sites include LinkedIn, Facebook, Instagram, Twitter, Pinterest, YouTube, etc.
a. Gather Employees’ Information from LinkedIn using theHarvester
LinkedIn is a social networking website for industry professionals. It connects the world’s human resources to aid productivity and success. The site contains personal information such as name, position, organization name, current location, educational qualifications, etc.
Here, we will gather information about the employees (name and job title) of a target organization that is available on LinkedIn using theHarvester tool. (As theHarvester’s LinkedIn source is not available in the lab environment, we use Bing as the data source instead.)
In the Terminal window of Kali, enter the command
theHarvester -d utas -l 200 -b bing
Where -d specifies the domain or company name to search, -l specifies the number of results to be retrieved, and -b specifies the data source (here, Bing).
theHarvester displays the results for UTAS obtained from the Bing source.
This concludes the demonstration of gathering employees’ information from LinkedIn using theHarvester. Close all open windows and document all the acquired information.
b. Gather Personal Information from Various Social Networking Sites using Sherlock
Sherlock is a python-based tool that is used to gather information about a target person over various social networking sites. Sherlock searches a vast number of social networking sites for a given target user, locates the person, and displays the results along with the complete URL related to the target person.
In a Terminal window of Parrot Security, switch to the root account, change to the sherlock/sherlock/ directory, and then type the command
python3 sherlock.py satya nadella
and press Enter. You will get all the URLs related to Satya Nadella, as shown in the screenshot. Scroll down to view all the results.
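Sherlock's approach is essentially a dictionary of profile URL templates that get formatted with the username and then probed over HTTP. A sketch of the composition step; the three-site list here is a tiny invented subset (Sherlock itself ships hundreds of entries):

```python
# Tiny illustrative subset of profile URL templates.
SITE_TEMPLATES = {
    "GitHub": "https://github.com/{}",
    "Twitter": "https://twitter.com/{}",
    "Instagram": "https://www.instagram.com/{}",
}

def candidate_profiles(username):
    """Map each known site to the profile URL it would use for `username`.
    A real checker would then HTTP-probe each URL for existence."""
    return {site: tpl.format(username) for site, tpl in SITE_TEMPLATES.items()}

for site, url in candidate_profiles("satyanadella").items():
    print(f"{site}: {url}")
```

The probing step is deliberately omitted: which HTTP status or page content signals "account exists" differs per site, which is exactly the knowledge Sherlock encodes.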
This concludes the demonstration of gathering personal information from various social networking sites using Sherlock. You can also use tools such as Social Searcher (https://www.social-searcher.com), UserRecon (https://github.com), etc. to gather additional information related to the target company and its employees from social networking sites.
Close all open windows and document all the acquired information.
c. Gather Information using Followerwonk
Followerwonk is an online tool that helps you explore and grow your social graph, digging deeper into Twitter analytics; for example, Who are your followers? Where are they located? When do they tweet? This can be used to gather Twitter information about any target organization or individual.
Open any web browser (here, Mozilla Firefox). In the address bar of the browser place your mouse cursor, type ‘https://followerwonk.com/analyze’ and press enter. Followerwonk website appears, as shown in the screenshot.
n the Screen Name search bar, type your target individual’s twitter tag (here, @satyanadella) and click the Do it button to analyze the users whom the target person follows. The results regarding the target appears, as shown in the screenshot.
Scroll down to view the detailed analysis. This concludes the demonstration of gathering information using Followerwonk. You can also use Hootsuite, Sysomos, etc. to gather additional information related to the target company and its employees from social networking sites.
Close all open windows and document all the acquired information.
4.4. Perform Website Footprinting
Website footprinting is a technique used to collect information regarding the target organization’s website. Website footprinting can provide sensitive information associated with the website such as registered names and addresses of the domain owner, domain names, host of the sites, OS details, IP details, registrar details, emails, filenames, etc.
a. Gather Information About a Target Website using Ping Command Line Utility
Ping is a network administration utility used to test the reachability of a host on an IP network and measure the round-trip time for messages sent from the originating host to a destination computer. The ping command sends an ICMP echo request to the target host and waits for an ICMP response. During this request-response process, ping measures the time from transmission to reception, known as round-trip time, and records any loss of packets. The ping command assists in obtaining domain information and the IP address of the target website.
Launch Command Prompt in Windows and enter the command
ping www.certifiedhacker.com
Note the target domain’s IP address in the result above (here, 162.241.216.11). You also obtain information on Ping Statistics such as packets sent, packets received, packets lost, and approximate round-trip time.
In the Command Prompt window, type
ping www.certifiedhacker.com -f -l 1500
and press Enter. The response, Packet needs to be fragmented but DF set, means that the frame is too large to be on the network and needs to be fragmented. Because the -f switch sets the Don’t Fragment (DF) flag, the packet was not sent, and the ping command returned this error.
In the Command Prompt window, type
ping www.certifiedhacker.com -f -l 1300
and press Enter. Observe that the maximum packet size is less than 1500 bytes and more than 1300 bytes.
Now, try different values until you find the maximum frame size. For instance, ping www.certifiedhacker.com -f -l 1473 replies with Packet needs to be fragmented but DF set, and ping www.certifiedhacker.com -f -l 1472 replies with a successful ping. It indicates that 1472 bytes are the maximum frame size on this machine’s network.
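The trial-and-error above is really a binary search for the largest payload that fits. A sketch that automates it against a simulated link; the 28-byte IP+ICMP header overhead is standard, but the `simulated_ping` function is a stand-in for issuing real `ping -f -l <size>` commands:

```python
def max_unfragmented_payload(ping_ok, lo=0, hi=65500):
    """Binary-search the largest payload size for which ping_ok(size)
    succeeds, mimicking repeated `ping -f -l <size>` trials."""
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if ping_ok(mid):
            lo = mid      # mid fits unfragmented; try larger
        else:
            hi = mid - 1  # mid needed fragmenting; try smaller
    return lo

# Simulated Ethernet link: 1500-byte MTU, 28 bytes of IP + ICMP headers.
MTU, HEADERS = 1500, 28
simulated_ping = lambda payload: payload + HEADERS <= MTU

print(max_unfragmented_payload(simulated_ping))  # 1472
```

The result matches the manual finding: 1472 payload bytes plus 28 header bytes equals the 1500-byte Ethernet MTU.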
Now, discover what happens when the TTL (Time to Live) expires. Every frame on the network has a TTL defined. If the TTL reaches 0, the router discards the packet. This mechanism prevents packets from looping endlessly through the network.
In Command Prompt, type
ping www.certifiedhacker.com -i 3
and press Enter. This option sets the time to live (-i) value as 3. The reply Reply from 192.168.100.6: TTL expired in transit means that the router (192.168.100.6; you may see a different IP address) discarded the frame because its TTL had expired (reached 0).
Now, change the time to live value to 4 by typing
ping www.certifiedhacker.com -i 4 -n 1
and press Enter.
Repeat the above step, increasing the TTL value by one each time, until you receive a reply from the IP address of www.certifiedhacker.com (in this case, 162.241.216.11). The smallest TTL value for which the reply comes from the destination host (162.241.216.11) is the hop count to the target.
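This TTL-incrementing loop is the same idea traceroute uses to count hops. A sketch against a simulated path; the router IPs are invented, and `simulated_probe` stands in for running `ping -i <ttl> -n 1`:

```python
def hop_count(probe, max_ttl=30):
    """Increase the TTL until the destination answers, mimicking repeated
    `ping -i <ttl> -n 1` runs; returns the hop count, or None if unreached."""
    for ttl in range(1, max_ttl + 1):
        replier, reached = probe(ttl)
        note = "" if reached else " (TTL expired in transit)"
        print(f"TTL={ttl}: reply from {replier}{note}")
        if reached:
            return ttl
    return None

# Simulated path: three routers in front of the destination host.
PATH = ["192.168.100.6", "10.0.0.1", "172.16.0.9", "162.241.216.11"]

def simulated_probe(ttl):
    node = PATH[min(ttl, len(PATH)) - 1]
    return node, node == PATH[-1]

print(hop_count(simulated_probe))  # 4
```

Each iteration reveals one more router on the path, which is itself useful footprinting information about the target network.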
This concludes the demonstration of gathering information about a target website using Ping command-line utility (such as the IP address of the target website, hop count to the target, and value of maximum frame size allowed on the target network).
Close all open windows and document all the acquired information.
b. Gather Information About a Target Website using Website Informer
Website Informer is an online tool that gathers detailed information on a website such as a website’s traffic rank, daily visitors rate, page views, etc. Website Informer discovers the main competitors of the website, reveals DNS servers used by the website, and also obtains the Whois record of the target website.
Open any web browser (here, Mozilla Firefox). In the address bar, type ‘https://website.informer.com’ and press Enter. The Website Informer website appears, as shown in the screenshot.
To extract information associated with the target organization’s website, type the target website’s URL (here, www.certifiedhacker.com) in the text field, and then click the Search button. A search result for WWW.CERTIFIEDHACKER.COM containing information such as General Info, Stats & Details, Whois, and IP Whois is displayed, as shown in the screenshot. In the General Info tab, information such as Created, Expires, Owner, Hosting company, Registrar, IPs, DNS, and Email associated with the target website is displayed, as shown in the screenshot.
Click on the Whois tab to view detailed Whois information about the target website, as shown in the screenshot.
Similarly, you can click on the Stats & Details and IP Whois tabs to view the detailed information of the target website. This concludes the demonstration of gathering information about a target website using the Website Informer online tool. You can also use tools such as Burp Suite, Zaproxy, etc. to perform website footprinting on a target website.
Close all open windows and document all the acquired information.
c. Extract a Company’s Data using Web Data Extractor
Launch Web Data Extractor. The Web Data Extractor main window appears. Click New to start a new session.
The Session settings window appears; type a URL (here, http://www.certifiedhacker.com) in the Starting URL field. Check all the options, as shown in the screenshot, and click OK.
Click Start to initiate the data extraction. Web Data Extractor will start collecting information (Session, Meta tags, Emails, Phones, Faxes, Merged list, URLs, and Inactive sites). Once the data extraction process is completed, an Information dialog box appears; click OK. View the extracted information by clicking the tabs. For example, select the Meta tags tab to view the URL, Title, Keywords, Description, Host, Domain, page size, etc.
To save the session, choose File and click Save session. Click the Meta tags tab, and then click the floppy disk icon. An Information pop-up may appear with the message You cannot save more than 10 records in Demo Version; click OK. The Save Meta tags window appears. In the File name field, click the folder icon, select the location where you want to save the file, choose the File format, and click Save.
This concludes the demonstration of extracting a company’s data using the Web Data Extractor tool. You can also use other web spiders such as ParseHub, SpiderFoot, etc. to extract the target organization’s data.
Close all open windows and document all the acquired information.
d. Mirror a Target Website using HTTrack Web Site Copier
Website mirroring is the process of creating a replica or clone of the original website. Mirroring a website helps you footprint it thoroughly on your local system: it allows you to download the website to a local directory and analyze all directories, HTML, images, flash, videos, and other files from the server on your computer.
You can duplicate websites by using website mirroring tools such as HTTrack Web Site Copier. HTTrack is an offline browser utility that downloads a website from the Internet to a local directory, builds all directories recursively, and transfers HTML, images, and other files from the webserver to another computer.
Here, we will use the HTTrack Web Site Copier tool to mirror the entire website of the target organization, store it in the local system drive, and browse the local website to identify possible exploits and vulnerabilities.
Launch HTTrack Web Site Copier, click OK in the pop-up window, and then click Next > to create a New Project.
Enter the name of the project (here, Test Project) in the New project name: field. Select the Base path: to store the copied files; click Next >.
Enter a target URL (here, www.certifiedhacker.com) in the Web Addresses: (URL) field and click Set options…. The WinHTTrack window appears; click the Scan Rules tab and select the checkboxes for the file types as shown in the following screenshot; click OK.
Click the Next > button. By default, the radio button will be selected for Please adjust connection parameters if necessary, then press FINISH to launch the mirroring operation. Check Disconnect when finished and click Finish to start mirroring the website.
Site mirroring progress will be displayed. Once the site mirroring is completed, WinHTTrack displays the message Mirroring operation complete; click on Browse Mirrored Website. If the How do you want to open this file? pop up appears, select any web browser and click OK. The mirrored website for www.certifiedhacker.com launches. The URL displayed in the address bar indicates that the website’s image is stored on the local machine.
Analyze all directories, HTML, images, flash, videos, and other files available on the mirrored target website. You can also check for possible exploits and vulnerabilities. The site will work like a live hosted website. Once done with your analysis, close the browser window and click Finish on the WinHTTrack window to complete the process. Some websites are very large, and it might take a long time to mirror the complete site.
This concludes the demonstration of mirroring a target website using HTTrack Web Site Copier. You can also use other mirroring tools such as NCollector Studio, Cyotek WebCopy, etc. to mirror a target website.
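A command-line alternative can produce a similar offline copy. A minimal sketch using GNU wget (an assumption; the lab itself uses HTTrack, and the helper name is illustrative):

```shell
# mirror_site URL DIR: download a browsable offline copy of a website
mirror_site() {
  wget --mirror           \
       --convert-links    \
       --adjust-extension \
       --page-requisites  \
       --no-parent        \
       -P "$2" "$1"
  # --mirror: recursive download with timestamping
  # --convert-links: rewrite links so the copy browses offline
  # --page-requisites: also fetch the images, CSS, and scripts each page needs
}

# Example (needs network): mirror_site http://www.certifiedhacker.com ./mirror
```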
Close all open windows and document all the acquired information.
e. Gather a Wordlist from the Target Website using CeWL
The words available on the target website may reveal critical information that can assist in performing further exploitation. CeWL is a Ruby app that spiders a given target URL to a specified depth, optionally following external links, and returns a list of unique words that can be used for cracking passwords.
Launch terminal in Kali, and enter the command
cewl -d 2 -m 5 www.certifiedhacker.com
Where -d represents the depth to spider the website (here, 2) and -m represents minimum word length (here, 5). A unique wordlist from the target website is gathered, as shown in the screenshot.
Alternatively, this unique wordlist can be written directly to a text file. To do so, type
cewl -w wordlist.txt -d 2 -m 5 www.certifiedhacker.com
and press Enter. By default, the wordlist file gets saved in the root directory. Type pluma wordlist.txt and press Enter to view the extracted wordlist. The file containing a unique wordlist extracted from the target website opens, as shown in the screenshot.
This wordlist can be used further to perform brute-force attacks against the previously obtained emails of the target organization’s employees.
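Raw CeWL output often benefits from post-processing before a brute-force run. A minimal sketch (the filter_wordlist helper and file names are illustrative, not part of CeWL):

```shell
# filter_wordlist IN OUT MINLEN: keep words of MINLEN+ characters, lowercase, dedupe
filter_wordlist() {
  awk -v m="$3" 'length($0) >= m' "$1" | tr 'A-Z' 'a-z' | sort -u > "$2"
}

# Example: filter_wordlist wordlist.txt filtered.txt 8
```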
This concludes the demonstration of gathering wordlist from the target website using CeWL.
Close all open windows and document all the acquired information.
4.5. Perform Email Footprinting
E-mail footprinting, or tracking, is a method to monitor or spy on email delivered to the intended recipient. This kind of tracking is possible through digitally time-stamped records that reveal the time and date when the target receives and opens a specific email.
Email footprinting reveals information such as:
- Recipient’s system IP address
- The GPS coordinates and map location of the recipient
- When an email message was received and read
- Type of server used by the recipient
- Operating system and browser information
- If a destructive email was sent
- The time spent reading the email
- Whether or not the recipient visited any links sent in the email
- PDFs and other types of attachments
- If messages were set to expire after a specified time
a. Gather Information about a Target by Tracing Emails using eMailTrackerPro
The email header is a crucial part of any email and it is considered a great source of information for any ethical hacker launching attacks against a target. An email header contains the details of the sender, routing information, addressing scheme, date, subject, recipient, etc. Additionally, the email header helps ethical hackers to trace the routing path taken by an email before delivering it to the recipient.
Here, we will gather information by analyzing the email header using eMailTrackerPro.
Launch eMailTrackerPro. The eMailTrackerPro main window appears, as shown in the screenshot.
To trace email headers, click the My Trace Reports icon from the View section. (here, you will see the output report of the traced email header). Click the Trace Headers icon from the New Email Trace section to start the trace. A pop-up window will appear; select Trace an email I have received. Copy the email header from the suspicious email you wish to trace and paste it in the Email headers: field under Enter Details section.
For finding email headers, open any web browser and log in to any email account of your choice; from the email inbox, open the message you would like to view headers for.
In Gmail, find the email header by following the steps:
- Open an email; click the More icon (three vertical dots) next to the Reply icon at the top-right corner of the message pane.
- Select Show original from the list.
- The Original Message window appears in a new browser tab with all the details about the email, including the email header
In Outlook, find the email header by following the steps:
- Double-click the email to open it in a new window
- Click the … (More actions) icon present at the right of the message-pane to open message options
- From the options, click View message details
- The message details window appears with all the details about the email, including the email header
Copy the entire email header text and paste it into the Email headers: field of eMailTrackerPro, and click Trace.
The My Trace Reports window opens. The email location will be traced on a Map (world map GUI). You can also view the summary by selecting Email Summary on the right-hand side of the window. The Table section right below the Map shows each hop in the route, with the IP address and suspected location for each hop.
To examine the report, click the View Report button above Map to view the complete trace report.
The complete report appears in the default browser. Expand each section to view detailed information.
This concludes the demonstration of gathering information through analysis of the email header using eMailTrackerPro. You can also use email tracking tools such as Infoga, Mailtrack, etc. to track an email and extract target information such as sender identity, mail server, sender’s IP address, location, etc.
Close all open windows and document all the acquired information.
4.6. Perform Whois footprinting
This lab focuses on how to perform a Whois lookup and analyze the results. Whois is a query and response protocol used for querying databases that store the registered users or assignees of an Internet resource, such as a domain name, an IP address block, or an autonomous system. The protocol listens for requests on TCP port 43. Regional Internet Registries (RIRs) maintain Whois databases, which contain the personal information of domain owners. For each resource, the Whois database provides text records with information about the resource itself and relevant information on assignees and registrants, along with administrative information (creation and expiration dates).
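Because the protocol is this simple, a Whois query can be issued by hand: send the query string plus CRLF to TCP port 43 and read the reply. A minimal sketch using netcat (the server names are illustrative; each registry and registrar runs its own Whois server):

```shell
# whois_query SERVER QUERY: the entire Whois protocol is one line over TCP port 43
whois_query() {
  printf '%s\r\n' "$2" | nc "$1" 43
}

# Example (needs network): whois_query whois.verisign-grs.com certifiedhacker.com
```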
Perform Whois Lookup using DomainTools
Launch a browser and enter ‘http://whois.domaintools.com’. The Whois Lookup website appears, as shown in the screenshot.
Now, in the Enter a domain or IP address… search bar, type www.certifiedhacker.com and click Search. This search result reveals the details associated with the URL entered, www.certifiedhacker.com, which includes organizational details such as registration details, name servers, IP address, location, etc., as shown in the screenshots.
This concludes the demonstration of gathering information about a target organization by performing the Whois lookup using DomainTools. You can also use other Whois lookup tools such as SmartWhois, Batch IP Converter, etc. to extract additional target Whois information.
Close all open windows and document all the acquired information.
4.7. Perform DNS Footprinting
DNS is considered the intermediary for all Internet communication. The primary function of DNS is to translate a domain name to an IP address, and vice versa, to enable human-machine-network-Internet communication. Since each device has a unique IP address, it is hard for people to memorize the IP addresses of every application they need. DNS maps IP addresses to a more easily remembered domain format, which eases the burden on human beings.
a. Gather DNS Information using nslookup Command Line Utility
nslookup is a network administration command-line utility, generally used for querying the DNS to obtain a domain name or IP address mapping or for any other specific DNS record. This utility is available both as a command-line utility and web application.
Here, we will perform DNS information gathering about target organizations using the nslookup command-line utility and NSLOOKUP web application.
Launch a Command Prompt in Windows, type nslookup and press Enter. In the nslookup interactive mode, type
set type=a
and press Enter. Setting the type as “a” configures nslookup to query for the IP address of a given domain. Type the target domain
www.certifiedhacker.com
and press Enter. This resolves the IP address and displays the result, as shown in the screenshot.
The first two lines in the result are: Server: dns.google and Address: 8.8.8.8. This indicates that the request was directed to the machine’s default DNS server (here, Google’s public resolver), which resolved your requested domain.
Because the response comes from your configured resolver (Google), not from the server that authoritatively hosts the domain www.certifiedhacker.com, it is considered a non-authoritative answer. Here, the IP address of the target domain www.certifiedhacker.com is 162.241.216.11.
Since the result returned is non-authoritative, you need to obtain the domain’s authoritative name server.
Type
set type=cname
and press Enter. The CNAME lookup is done directly against the domain’s authoritative name server and lists the CNAME records for a domain.
Type
certifiedhacker.com
and press Enter.
This returns the domain’s authoritative name server (ns1.bluehost.com), along with the domain administrator’s mail address (dnsadmin.box5331.bluehost.com), as shown in the screenshot.
Since you have obtained the authoritative name server, you will need to determine the IP address of the name server. Issue the command
set type=a
and press Enter. Type
ns1.bluehost.com
(or the primary name server that is displayed in your lab environment) and press Enter. This returns the IP address of the server, as shown in the screenshot.
The authoritative name server stores the records associated with the domain. So, if an attacker can determine the authoritative name server (primary name server) and obtain its associated IP address, he/she might attempt to exploit the server to perform attacks such as DoS, DDoS, URL Redirection, etc. You can also perform the same operations using the NSLOOKUP online tool. Conduct a series of queries and review the information to gain familiarity with the NSLOOKUP tool and gather information.
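The same interactive chain can be run non-interactively. A sketch using dig from the BIND utilities (an assumption; scripted nslookup works too): first fetch the domain's NS record, then resolve that name server's own A record:

```shell
# resolve_auth_ns_ip DOMAIN: IP address of the domain's first listed name server
resolve_auth_ns_ip() {
  local domain=$1 ns
  ns=$(dig +short NS "$domain" | head -n 1)   # e.g. ns1.bluehost.com.
  dig +short A "$ns"                          # the name server's own address
}

# Example (needs network): resolve_auth_ns_ip certifiedhacker.com
```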
b. Gather DNS Information using Online Tool
Now, we will use the online tool NSLOOKUP to gather DNS information about the target domain. Open any web browser (here, Mozilla Firefox). In the address bar, type ‘http://www.kloth.net/services/nslookup.php’ and press Enter. The NSLOOKUP website appears, as shown in the screenshot.
Once the site opens, in the Domain: field, enter certifiedhacker.com. Set the Query: field to default [A (IPv4 address)] and click the Look it up button to review the results that are displayed.
In the Query: field, click the drop-down arrow and check the different options that are available, as shown in the screenshot. As you can see, there is an option for AAAA (IPv6 address); select that and click Look it up. Perform queries related to this, since there are attacks that are possible over IPv6 networks as well.
This concludes the demonstration of DNS information gathering using the nslookup command-line utility and NSLOOKUP online tool. You can also use DNS lookup tools such as Professional Toolset, DNS Records, etc. to extract additional target DNS information.
Close all open windows and document all the acquired information.
4.8. Perform Network Footprinting
Network footprinting is a process of accumulating data regarding a specific network environment. It enables ethical hackers to draw a network diagram and analyze the target network in more detail to perform advanced attacks.
a. Locate the Network Range
Network range information assists in creating a map of the target network. Using the network range, you can gather information about how the network is structured and which machines in the networks are alive. Further, it also helps to identify the network topology and access the control device and operating system used in the target network.
Here, we will locate the network range using the ARIN Whois database search tool.
Open any web browser (here, Mozilla Firefox). In the address bar, type ‘https://www.arin.net/about/welcome/region’ and press Enter. The ARIN website appears. In the search bar, enter the IP address of the target organization (here, the target organization is certifiedhacker.com, whose IP is 162.241.216.11), and then click the Search button. You will get information about the network range, along with other information such as network type, registration information, etc.
This concludes the demonstration of locating network range using the ARIN Whois database search tool.
Close all open windows and document all the acquired information.
b. Perform Network Tracerouting in Windows Machines
The route is the path that the network packet traverses between the source and destination. Network tracerouting is a process of identifying the path and hosts lying between the source and destination. Network tracerouting provides critical information such as the IP address of the hosts lying between the source and destination, which enables you to map the network topology of the organization. Traceroute can be used to extract information about network topology, trusted routers, firewall locations, etc.
Here, we will perform network tracerouting using both Windows and Linux machines.
Open the Command Prompt window in Windows. Type
tracert www.certifiedhacker.com
and press Enter to view the hops that the packets made before reaching the destination.
Type
tracert /?
and press Enter to show the different options for the command, as shown in the screenshot.
Type
tracert -h 5 www.certifiedhacker.com
and press Enter to perform the trace, but with only 5 maximum hops allowed. After viewing the result, close the command prompt window.
c. Perform Network Tracerouting in Linux Machines
Launch the terminal in Parrot Security and enter the command
traceroute www.certifiedhacker.com
to view the hops that the packets made before reaching the destination.
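Traceroute output can also be post-processed, for instance to count the hops automatically. A minimal sketch (assumes Linux traceroute, which prints one header line followed by one line per hop; the helper name is illustrative):

```shell
# hop_count HOST: number of hops traceroute reports on the way to HOST
hop_count() {
  traceroute -n "$1" 2>/dev/null | tail -n +2 | wc -l
  # tail -n +2 drops the "traceroute to ..." header; each remaining line is a hop
}

# Example (needs network): hop_count www.certifiedhacker.com
```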
This concludes the demonstration of performing network tracerouting using the Windows and Linux machines. You can also use other traceroute tools such as VisualRoute, Traceroute NG, etc. to extract additional network information of the target organization.
Close all open windows and document all acquired information.
4.9. Perform Footprinting using Various Footprinting Tools
Footprinting tools are used to collect basic information about the target systems in order to exploit them. Information collected by the footprinting tools contains the target’s IP location information, routing information, business information, address, phone number and social security number, details about the source of an email and a file, DNS information, domain information, etc.
a. Footprinting a Target using Recon-ng
Recon-ng is a web reconnaissance framework with independent modules and database interaction that provides an environment in which open-source web-based reconnaissance can be conducted. Here, we will use Recon-ng to perform network reconnaissance, gather personnel information, and gather target information from social networking sites.
Launch terminal in Parrot Security, and switch to root account and enter the home directory of root. In the Terminal window, type the command
recon-ng
and press Enter to launch the application.
Type help
and press Enter to view all the commands that allow you to add/delete records to a database, query a database, etc.
Type
marketplace install all
and press Enter to install all the modules available in recon-ng.
After the installation of modules, type modules search
and press Enter. This displays all the modules available in recon-ng. You will be able to perform network discovery, exploitation, reconnaissance, etc. by loading the required modules. Type workspaces
and press Enter. This displays the commands related to workspaces.
Create a workspace in which to perform network reconnaissance. In this task, we shall be creating a workspace named CEH. To create the workspace, type the command
workspaces create CEH
and press Enter. This creates a workspace named CEH.
Type workspaces list
and press Enter. This displays a list of workspaces (along with the workspace added in the previous step) present within the workspaces database.
Add a domain in which you want to perform network reconnaissance. Type the command
db insert domains
and press Enter. In the domain (TEXT) option type certifiedhacker.com
and press Enter. In the notes (TEXT) option press Enter. This adds certifiedhacker.com to the present workspace. You can view the added domain by issuing the show domains
command, as shown in the screenshot.
Harvest the hosts-related information associated with certifiedhacker.com by loading network reconnaissance modules such as brute_hosts, Netcraft, and Bing. Type modules load brute
and press Enter to view all the modules related to brute forcing. In this task, we will be using the recon/domains-hosts/brute_hosts module to harvest hosts. To load the recon/domains-hosts/brute_hosts module, type the
modules load recon/domains-hosts/brute_hosts
command and press Enter.
Type run
and press Enter. This begins to harvest the hosts, as shown in the screenshot.
Observe that hosts have been added by running the recon/domains-hosts/brute_hosts module.
You have now harvested the hosts related to certifiedhacker.com using the brute_hosts module. You can use other modules such as Netcraft and Bing to harvest more hosts. Use the back command to go back to the CEH attributes terminal. To resolve hosts using the Bing module, use the following commands:
back
modules load recon/domains-hosts/bing_domain_web
run
Now, perform a reverse lookup for each IP address (the IP address that is obtained during the reconnaissance process) to resolve to respective hostnames.
Type modules load reverse_resolve
command and press Enter to view all the modules associated with the reverse_resolve keyword. In this task, we will be using the recon/hosts-hosts/reverse_resolve module. Type the
modules load recon/hosts-hosts/reverse_resolve
command and press Enter to load the module. Issue the run
command to begin the reverse lookup.
Once done with the reverse lookup process, type the show hosts
command and press Enter. This displays all the hosts that are harvested so far, as shown in the screenshot.
Now, type the back
command and press Enter to go back to the CEH attributes terminal. Now, that you have harvested several hosts, we will prepare a report containing all the hosts.
Type the
modules load reporting
command and press Enter to view all the modules associated with the reporting keyword. In this lab, we will save the report in HTML format. So, the module used is reporting/html. Type the command
modules load reporting/html
and press Enter. Observe that you need to assign values for CREATOR and CUSTOMER options while the FILENAME value is already set, and you may change the value if required.
Type:
options set FILENAME /root/Desktop/results.html
and press Enter. By issuing this command, you are setting the report name as results.html and the path to store the file as Desktop. Type
options set CREATOR [your name]
(here, Jason) and press Enter. Type
options set CUSTOMER Certifiedhacker Networks
(since you have performed network reconnaissance on the certifiedhacker.com domain) and press Enter.
Type the run
command and press Enter to create a report for all the hosts that have been harvested.
The generated report is saved to /root/Desktop/. Open it in a browser from that folder. The generated report appears in the Firefox browser, displaying the summary of the harvested hosts.
You can expand the Hosts node to view all the harvested hosts, as shown in the screenshot.
Close all open windows.
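The interactive Recon-ng session above can be replayed non-interactively with a resource file passed via recon-ng -r. A minimal sketch (exact command syntax may vary between Recon-ng versions; in particular, db insert may prompt for its values interactively):

```shell
# Write a recon-ng resource file replaying the host-harvesting steps above
cat > ceh-recon.rc <<'EOF'
workspaces create CEH
db insert domains certifiedhacker.com
modules load recon/domains-hosts/brute_hosts
run
back
modules load recon/domains-hosts/bing_domain_web
run
back
show hosts
exit
EOF

# Usage (assumes recon-ng is installed): recon-ng -r ceh-recon.rc
```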
Until now, we have used the Recon-ng tool to perform network reconnaissance on a target domain. Now, we will use Recon-ng to gather personnel information.
Launch terminal and recon-ng in Parrot Security. Add a workspace by issuing the command
workspaces create reconnaissance
and press Enter. This creates a workspace named reconnaissance. Set a domain and perform footprinting on it to extract contacts available in the domain. Type
modules load recon/domains-contacts/whois_pocs
and press Enter. This module uses the ARIN Whois RWS to harvest POC data from Whois queries for the given domain. Type the info
command and press Enter to view the options required to run this module. Type
options set SOURCE facebook.com
and press Enter to add facebook.com as a target domain. Type the run
command and press Enter.
The recon/domains-contacts/whois_pocs module extracts the contacts associated with the domain and displays them, as shown in the screenshot.
Type back
and press Enter to go back to the workspaces (reconnaissance) terminal.
Until now, we have obtained contacts related to the domains. Note down these contacts’ names.
Now, we will validate the existence of names (usernames) on specific websites.
The recon/profiles-profiles/namechk module validates the username existence of a specified contact. The contact we will use in this lab is Mark Zuckerberg. Type the command
modules load recon/profiles-profiles/namechk
and press Enter to load this module. Type
options set SOURCE MarkZuckerberg
and press Enter. This command sets MarkZuckerberg as the source for which you want to find the user existence on specific websites. Type run
and press Enter. This begins the search for the keyword MarkZuckerberg on various websites. Recon-ng begins to search the Internet for the presence of the username on websites and, if found, it returns the result stating “User Exists!”. (Here, no results are obtained.)
Type the back
command and press Enter to go back to the workspaces (reconnaissance) terminal. To find the existence of user-profiles on various websites, you need to load the recon/profiles-profiles/profiler module. Type the command
modules load recon/profiles-profiles/profiler
and press Enter. Type the command
options set SOURCE MarkZuckerberg
and press Enter. Type the run
command and press Enter. The recon/profiles-profiles/profiler module searches for this username and returns the URL of the profile (found with the matching username):
Type back
and press Enter to go back to the workspaces terminal.
Now that we have verified the user existence and obtained the profile URL, we will prepare a report containing the result.
Use the reporting module to generate and view the report, as in the previous steps.
We have now gathered information about the employee working in a target organization. This concludes the demonstration of gathering host information of the target domain and gathering personnel information of a target organization.
Close all open windows and document all the acquired information.
b. Footprinting a Target using Maltego
Maltego is a footprinting tool used to gather maximum information for the purpose of ethical hacking, computer forensics, and pentesting. It provides a library of transforms to discover data from open sources and visualizes that information in a graph format suitable for link analysis and data mining. Maltego’s graphical interface makes seeing these relationships instant and accurate, and even makes it possible to see hidden connections.
Here, we will gather a variety of information about the target organization using Maltego.
Launch Maltego in Parrot Security.
In the Maltego Community Edition window, click create a new graph icon from the top left corner in the toolbar. The New Graph (1) window appears, as shown in the screenshot.
In the left-pane of Maltego GUI, you can find the Entity Palette box, which contains a list of default built-in transforms. In the Infrastructure node under Entity Palette, observe a list of entities such as AS, DNS Name, Domain, IPv4 Address, URL, Website, etc. Drag the Website entity onto the New Graph (1) window. The entity appears on the new graph, with the www.paterva.com URL selected by default. If you are not able to view the entity as shown in the screenshot, click in the New Graph (1) window and scroll up, which will increase the size of the entity.
Double-click the name www.paterva.com, change the domain name to www.certifiedhacker.com, and press Enter. Right-click the entity and select All Transforms. Maltego starts running the To Server Technologies [Using BuiltWith] transform. Observe the status in the progress bar. Once Maltego completes the transform, it displays the technologies implemented on the server that hosts the website, as shown in the following screenshot.
After obtaining the built-in technologies of the server, you can search for related vulnerabilities and simulate exploitation techniques to hack them. To start a new transform, select all the entities, excluding the www.certifiedhacker.com website entity, and delete them. Now, right-click the www.certifiedhacker.com website entity and select All Transforms --> To Domains [DNS]. The domain corresponding to the website displays, as shown in the following screenshot.
Right-click the certifiedhacker.com entity and select All Transforms --> To DNS Name [Using Name Schema diction…].
Observe the status in the progress bar. This transform will attempt to test various name schemas against a domain and try to identify a specific name schema for the domain, as shown in the following screenshot.
After identifying the name schema, attackers attempt to simulate various exploitation techniques to gain sensitive information related to the resultant name schemas. For example, an attacker may implement a brute-force or dictionary attack to log in to ftp.certifiedhacker.com and gain confidential information. Select only the name schemas by dragging and deleting them. Right-click the certifiedhacker.com entity and select All Transforms --> To DNS Name - SOA (Start of Authority).
This returns the primary name server and the email of the domain administrator, as shown in the following screenshot.
By extracting the SOA related information, attackers attempt to find vulnerabilities in their services and architectures and exploit them. Select both the name server and the email by dragging and deleting them. Right-click the certifiedhacker.com entity and select All Transforms --> To DNS Name - MX (mail server).
This transform returns the mail server associated with the certifiedhacker.com domain, as shown in the following screenshot.
By identifying the mail exchanger server, attackers attempt to exploit the vulnerabilities in the server and, thereby, use it to perform malicious activities such as sending spam e-mails. Select only the mail server by dragging and deleting it. Right-click the certifiedhacker.com entity and select All Transforms --> To DNS Name - NS (name server).
This returns the name servers associated with the domain, as shown in the following screenshot.
By identifying the primary name server, an attacker can implement various techniques to exploit the server and thereby perform malicious activities such as DNS Hijacking and URL redirection. Select both the domain and the name server by dragging and deleting them. Right-click the entity and select All Transforms --> To IP Address [DNS].
This displays the IP address of the website, as shown in the following screenshot.
By obtaining the IP address of the website, an attacker can simulate various scanning techniques to find open ports and vulnerabilities and, thereby, attempt to intrude in the network and exploit them. Right-click the IP address entity and select All Transforms --> To location [city, country].
This transform identifies the geographical location of the IP address, as shown in the following screenshot.
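Before trusting a geolocation result, it helps to confirm the address is actually globally routable; private or reserved ranges will never map to a real location. A quick check with Python's standard ipaddress module (the sample addresses are illustrative):

```python
import ipaddress

def is_geolocatable(ip_string):
    """Only public (globally routable) IPs can be meaningfully geolocated."""
    return ipaddress.ip_address(ip_string).is_global

print(is_geolocatable("192.168.1.10"))  # False: RFC 1918 private range
print(is_geolocatable("8.8.8.8"))       # True: public address
```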
By obtaining the information related to geographical location, attackers can perform social engineering attacks by making voice calls (vishing) to an individual in an attempt to leverage sensitive information. Now, right-click the www.certifiedhacker.com website entity and select All Transforms --> To Domains [DNS]. The domains corresponding to the website display, as shown in the screenshot.
Right-click the domain entity (certifiedhacker.com) and select All Transforms --> To Entities from WHOIS [IBM Watson].
This transform returns the entities pertaining to the owner of the domain, as shown in the following screenshot.
By obtaining this information, you can exploit the servers displayed in the result or simulate a brute force attack or any other technique to hack into the admin mail account and send phishing emails to the contacts in that account. Apart from the aforementioned methods, you can perform footprinting on the critical employee from the target organization to gather additional personal information such as email addresses, phone numbers, personal information, image, alias, phrase, etc. In the left-pane of the Maltego GUI, click the Personal node under Entity Palette to observe a list of entities such as Email Address, Phone Numbers, Image, Alias, Phrase, etc. Apart from the transforms mentioned above, other transforms can track accounts and conversations of individuals who are registered on social networking sites such as Twitter. Extract all possible information. By extracting all this information, you can simulate actions such as enumeration, web application hacking, social engineering, etc., which may allow you access to a system or network, gain credentials, etc.
This concludes the demonstration of footprinting a target using Maltego. Close all open windows and document all the acquired information.
c. Footprinting a Target using OSRFramework
OSRFramework is a set of libraries that are used to perform Open Source Intelligence tasks. They include references to many different applications related to username checking, DNS lookups, information leaks research, deep web search, regular expressions extraction, and many others. It also provides a way of making these queries graphically as well as several interfaces to interact with such as OSRFConsole or a Web interface.
Launch a terminal in Parrot Security, switch to the root account, and change to the root account's home directory. Use usufy.py
to check for the existence of a profile for the given user details on different social networking platforms. Type
usufy.py -n [target user name or profile name] -p [target platform]
(here, the target user name or profile is Mark Zuckerberg and the target platforms are twitter, facebook, and youtube) and press Enter. -n specifies the list of nicknames to process and -p specifies the platforms to search.
Use domainfy.py to check for existing domains created from words and nicknames. Type
domainfy.py -n [Domain Name] -t all
(here, the target domain name is ECCOUNCIL) and press Enter. The tool will retrieve all the domains related to the target domain.
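domainfy.py essentially expands the nickname across TLDs and then checks which of the resulting domains exist. The expansion step can be sketched offline like this (the TLD subset is an assumption; the real tool's -t all list is much larger):

```python
# Expand a nickname across common TLDs, domainfy.py-style.
# TLDS is an illustrative subset, not domainfy's full list.
TLDS = [".com", ".net", ".org", ".info", ".io"]

def domain_candidates(name, tlds=TLDS):
    """Return domain names to probe for the given word or nickname."""
    return [name.lower() + tld for tld in tlds]

print(domain_candidates("ECCOUNCIL"))
# ['eccouncil.com', 'eccouncil.net', 'eccouncil.org', 'eccouncil.info', 'eccouncil.io']
```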
Similarly, you can use the following OSRFramework packages to gather information about the target.
- searchfy.py - Gathers information about users on social networking pages
- mailfy.py - Gathers information about email accounts
- phonefy.py - Checks for the existence of a given series of phones
- entify.py - Extracts entities using regular expressions from provided URLs
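As an illustration of the entify.py approach, the snippet below extracts email-like entities from raw text with a regular expression (the pattern is a simplified assumption, not OSRFramework's actual expression):

```python
import re

# Simplified email pattern; real extractors use stricter expressions.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def extract_emails(text):
    """Pull email-style entities out of free text."""
    return EMAIL_RE.findall(text)

sample = "Contact admin@certifiedhacker.com or sales@example.org for details."
print(extract_emails(sample))
# ['admin@certifiedhacker.com', 'sales@example.org']
```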
This concludes the demonstration of gathering information about the target user aliases from multiple social media platforms using OSRFramework.
Close all open windows and document all the acquired information.
d. Footprinting a Target using BillCipher
BillCipher is an information gathering tool for a Website or IP address. Using this tool, you can gather information such as DNS Lookup, Whois lookup, GeoIP Lookup, Subnet Lookup, Port Scanner, Page Links, Zone Transfer, HTTP Header, etc. Here, we will use the BillCipher tool to footprint a target website URL.
Launch a terminal in Parrot Security, switch to the root account, and change to the root account's home directory. Type cd BillCipher
and press Enter to navigate to the BillCipher directory. Now, type python3 billcipher.py
and press Enter to launch the application. BillCipher application initializes. In the Do you want to collect information of a website or IP address? option, type website and press Enter. In the Enter the website address option, type the target website URL (here, www.certifiedhacker.com) and press Enter. BillCipher displays various available options that you can use to gather information regarding a target website. In the What information would you like to collect? option, type 1 to choose the DNS Lookup option and press Enter. The result appears, displaying the DNS information regarding the target website, as shown in the screenshot. In the Do you want to continue? option, type Yes
and press Enter to continue.
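BillCipher's DNS Lookup option boils down to resolving the hostname; the same A-record lookup can be done with Python's standard socket module (the live result depends on the DNS visible to the lab machine):

```python
import socket

def dns_lookup(hostname):
    """Resolve a hostname to an IPv4 address; None if it does not resolve."""
    try:
        return socket.gethostbyname(hostname)
    except socket.gaierror:
        return None

print(dns_lookup("www.certifiedhacker.com"))  # live IP, or None when offline
```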
Following the previous steps, choose the GeoIP Lookup option from the available information gathering options. The result appears, displaying the GeoIP Lookup information of the target website, as shown in the screenshot. In the Do you want to continue? option, type Yes and press Enter to continue.
Following the previous steps, choose the Subnet Lookup option from the available information gathering options. The result appears, displaying the Subnet Lookup information of the target website. In the Do you want to continue? option, type Yes and press Enter to continue.
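A subnet lookup derives the network range that contains a given IP. The same calculation is available in the standard ipaddress module (the address and the /24 prefix below are illustrative assumptions):

```python
import ipaddress

# strict=False lets us pass a host address rather than a network address.
net = ipaddress.ip_network("162.241.216.11/24", strict=False)
print(net.network_address)    # 162.241.216.0
print(net.broadcast_address)  # 162.241.216.255
print(net.num_addresses)      # 256
```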
Following the previous steps, choose the Page Links option from the available information gathering options. The result appears, displaying a list of Visible links and Hidden links of the target website, as shown in the screenshot. In the Do you want to continue? option, type Yes and press Enter to continue.
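Collecting page links, as this option does, can be reproduced with the standard library's html.parser; the sample HTML below is an assumption standing in for a fetched page:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Accumulate href values from every <a> tag fed to the parser."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

collector = LinkCollector()
collector.feed('<a href="/about.html">About</a> <a href="https://example.org">Out</a>')
print(collector.links)  # ['/about.html', 'https://example.org']
```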
Following the previous steps, choose the HTTP Header option from the available information gathering options. The result appears, displaying information regarding the HTTP header of the target website, as shown in the screenshot. In the Do you want to continue? option, type Yes and press Enter to continue.
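The HTTP Header option reports the response headers of the site. A small offline sketch that parses a raw header block into fields (the sample response is illustrative, not a live capture):

```python
# Illustrative raw HTTP response head; a live one comes from the server.
RAW_RESPONSE = (
    "HTTP/1.1 200 OK\r\n"
    "Server: Apache\r\n"
    "Content-Type: text/html; charset=UTF-8\r\n"
)

def parse_headers(raw):
    """Split a raw HTTP response head into (status line, header dict)."""
    status_line, _, rest = raw.partition("\r\n")
    headers = {}
    for line in rest.split("\r\n"):
        if ": " in line:
            name, _, value = line.partition(": ")
            headers[name] = value
    return status_line, headers

status, headers = parse_headers(RAW_RESPONSE)
print(status)             # HTTP/1.1 200 OK
print(headers["Server"])  # Apache
```

Headers such as Server and X-Powered-By are what attackers mine for technology fingerprints.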
Following the previous steps, choose the Host Finder option from the available information gathering options. The result appears, displaying information regarding the IP address of the target website, as shown in the screenshot. In the Do you want to continue? option, type Yes and press Enter to continue.
Following the previous steps, choose the Host DNS Finder option from the available information gathering options. The result appears, displaying information regarding the host DNS of the target website, as shown in the screenshot. In the Do you want to continue? option, type Yes and press Enter to continue.
Following the previous steps, choose the Website Copier option from the available information gathering options. The tool starts mirroring the target website; this will take approximately 5 minutes. After the mirroring process completes, the mirrored website is saved in the websource folder, as shown in the screenshot. In the Do you want to continue? option, type No and press Enter to exit BillCipher.
Now, click Places from the top section of the Desktop and click Home Folder from the context menu.
Switch to the root account's home folder. The root directory window appears; navigate to BillCipher --> websource --> www.certifiedhacker.com --> www.certifiedhacker.com. Right-click the index.html file and navigate to Open With --> Firefox to open the mirrored website. The mirrored target website (www.certifiedhacker.com) appears in the Mozilla Firefox browser, as shown in the screenshot.
Similarly, you can use other information gathering options to gather information about the target.
This concludes the demonstration of footprinting the target website URL using BillCipher. You can also use footprinting tools such as Recon-Dog, Th3Inspector, Raccoon, Orb, etc. to gather additional information related to the target company.
Close all open windows and document all the acquired information.
e. Footprinting a Target using OSINT Framework
OSINT Framework is an open source intelligence gathering framework that helps security professionals perform automated footprinting and reconnaissance, OSINT research, and intelligence gathering. It is focused on gathering information from free tools or resources. This framework includes a simple web interface that lists various OSINT tools arranged by category, shown as an OSINT tree structure on the web interface.
The OSINT Framework includes the following indicators with the available tools:
- (T) - Indicates a link to a tool that must be installed and run locally
- (D) - Google Dork
- (R) - Requires registration
- (M) - Indicates a URL that contains the search term and the URL itself must be edited manually
Here, we will use the OSINT Framework to explore footprinting categories and associated tools.
Launch a browser, type https://osintframework.com/ in the address bar, and press Enter. The OSINT Framework website appears; you can observe the OSINT tree on the left side of the screen, as shown in the screenshot.
Clicking on any of the categories such as Username, Email Address, or Domain Name will make many useful resources appear on the screen in the form of a sub-tree. Click the Username category and click to expand the Username Search Engines and Specific Sites sub-categories. You can observe a list of OSINT tools filtered by sub-categories (Username Search Engines and Specific Sites sub-categories).
From the list of available tools under the Username Search Engines category, click on the Namechk tool to navigate to the Namechk website. The Namechk website appears, as shown in the screenshot.
Close the current tab to navigate back to the OSINT Framework webpage. Similarly, you can explore other tools from the list of mentioned tools under the Username Search Engines and Specific Sites sub-categories.
Now, click the Domain Name category, and its sub-categories appear. Click to expand the Whois Records sub-category. A list of tools under the Whois Records sub-category appears; click the Domain Dossier tool. The Domain Dossier website appears, as shown in the screenshot.
Close the current tab to navigate back to the OSINT Framework webpage. Now, click the Metadata category and click the FOCA tool from a list of available tools. The FOCA website appears, displaying information about the tool along with its download link, as shown in the screenshot.
Similarly, you can explore other available categories such as Email Address, IP Address, Social Networks, Instant Messaging, etc. and the tools associated with each category. Using these tools, you can perform footprinting on the target organization.
This concludes the demonstration of performing footprinting using the OSINT Framework.
Close all open windows and document all the acquired information.
Source: https://blog.csdn.net/taof211/article/details/110822541