
Web Scraping Homework – Mission to Mars

 

Step 1 – Scraping

Complete your initial scraping using Jupyter Notebook, BeautifulSoup, Pandas, and Requests/Splinter.

  • Create a Jupyter Notebook file called mission_to_mars.ipynb and use this to complete all of your scraping and analysis tasks. The following outlines what you need to scrape.

NASA Mars News

  • Scrape the NASA Mars News site and collect the latest News Title and Paragraph Text. Assign the text to variables called news_title and news_p that you can reference later, as in the example below.

# Example:

news_title = "NASA's Next Mars Mission to Investigate Interior of Red Planet"



news_p = "Preparation of NASA's next spacecraft to Mars, InSight, has ramped up this summer, on course for launch next May from Vandenberg Air Force Base in central California -- the first interplanetary launch in history from America's West Coast."

JPL Mars Space Images – Featured Image

  • Visit the url for JPL Featured Space Image here:  https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars 
  • Use splinter to navigate the site and find the image url for the current Featured Mars Image and assign the url string to a variable called featured_image_url.
  • Make sure to find the image url to the full size .jpg image.
  • Make sure to save a complete url string for this image.
# Example:

featured_image_url = 'https://www.jpl.nasa.gov/spaceimages/images/largesize/PIA16225_hires.jpg'
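
A hedged sketch of one click-through approach, continuing with the browser and BeautifulSoup objects from the sketch above; the full_image button id, the "more info" link text, and the figure.lede selector reflect the JPL page layout at the time this assignment was written and may need adjusting.

from urllib.parse import urljoin

browser.visit('https://www.jpl.nasa.gov/spaceimages/?search=&category=Mars')
browser.find_by_id('full_image').click()                   # open the image overlay
browser.links.find_by_partial_text('more info').click()    # go to the detail page

soup = BeautifulSoup(browser.html, 'html.parser')
img = soup.select_one('figure.lede a img')                 # the full-size .jpg lives here
featured_image_url = urljoin('https://www.jpl.nasa.gov', img['src'])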

Mars Weather

  • Visit the Mars Weather twitter account here and scrape the latest Mars weather tweet from the page. Save the tweet text for the weather report as a variable called mars_weather.
  • Note: Be sure you are not signed in to twitter, or scraping may become more difficult.
  • Note: Twitter frequently changes how information is presented on their website. If you are having difficulty getting the correct html tag data, consider researching Regular Expression Patterns and how they can be used in combination with the .find() method.
# Example:

mars_weather = 'Sol 1801 (Aug 30, 2017), Sunny, high -21C/-5F, low -80C/-112F, pressure at 8.82 hPa, daylight 06:09-17:55'
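
Because Twitter's markup changes so often, one hedged option (per the note above) is to search the rendered page text with a regular expression instead of relying on a specific tag or class; the account URL and the "Sol ... pressure" pattern are assumptions.

import re

browser.visit('https://twitter.com/marswxreport?lang=en')
soup = BeautifulSoup(browser.html, 'html.parser')

# Match any text node that reads like a weather report, e.g. "Sol 1801 (...) pressure at 8.82 hPa".
pattern = re.compile(r'sol \d+.+pressure', re.IGNORECASE | re.DOTALL)
match = soup.find(text=pattern)
mars_weather = match.strip() if match else None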

Mars Facts

  • Visit the Mars Facts webpage here and use Pandas to scrape the table containing facts about the planet including Diameter, Mass, etc.
  • Use Pandas to convert the data to an HTML table string (a short sketch follows this list).
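
A minimal Pandas sketch; the space-facts.com URL stands in for the "here" link above (an assumption), and the column names are placeholders.

import pandas as pd

url = 'https://space-facts.com/mars/'      # assumed target of the "here" link above
tables = pd.read_html(url)                 # needs lxml or html5lib; returns one DataFrame per <table>
df = tables[0]
df.columns = ['Description', 'Value']      # the raw table comes back with unnamed columns
html_table = df.to_html(index=False)       # HTML table string for the Flask template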

Mars Hemispheres

  • Visit the USGS Astrogeology site here to obtain high-resolution images for each of Mars's hemispheres.
  • You will need to click each of the links to the hemispheres in order to find the image url to the full resolution image.
  • Save both the image url string for the full resolution hemisphere image, and the Hemisphere title containing the hemisphere name. Use a Python dictionary to store the data using the keys img_url and title.
  • Append the dictionary with the image url string and the hemisphere title to a list. This list will contain one dictionary for each hemisphere.
# Example:

hemisphere_image_urls = [

   {"title": "Valles Marineris Hemisphere", "img_url": "..."},

   {"title": "Cerberus Hemisphere", "img_url": "..."},

   {"title": "Schiaparelli Hemisphere", "img_url": "..."},

   {"title": "Syrtis Major Hemisphere", "img_url": "..."},

]
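
A hedged loop using the same Splinter browser as above; the search URL, the a.product-item h3 selector, the h2.title class, and the "Sample" link text all reflect the USGS site layout at the time and may need adjusting.

base_url = 'https://astrogeology.usgs.gov'
browser.visit(base_url + '/search/results?q=hemisphere+enhanced&k1=target&v1=Mars')

hemisphere_image_urls = []
for i in range(len(browser.find_by_css('a.product-item h3'))):
    # Re-query the links on every pass because browser.back() reloads the page.
    browser.find_by_css('a.product-item h3')[i].click()
    soup = BeautifulSoup(browser.html, 'html.parser')
    title = soup.find('h2', class_='title').get_text(strip=True)
    img_url = browser.links.find_by_text('Sample').first['href']   # full-resolution image
    hemisphere_image_urls.append({'title': title, 'img_url': img_url})
    browser.back()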

Step 2 – MongoDB and Flask Application

Use MongoDB with Flask templating to create a new HTML page that displays all of the information that was scraped from the URLs above.

  • Start by converting your Jupyter notebook into a Python script called scrape_mars.py with a function called scrape that will execute all of your scraping code from above and return one Python dictionary containing all of the scraped data.
  • Next, create a route called /scrape that will import your scrape_mars.py script and call your scrape function.
    • Store the return value in Mongo as a Python dictionary.
  • Create a root route / that will query your Mongo database and pass the mars data into an HTML template to display the data (a minimal sketch of the full app appears after the screenshots below).
  • Create a template HTML file called index.html that will take the mars data dictionary and display all of the data in the appropriate HTML elements. Use the following as a guide for what the final product should look like, but feel free to create your own design.

[Screenshots: final_app_part1.png and final_app_part2.png]
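
A minimal sketch of the Flask app described above, assuming Flask-PyMongo, a local MongoDB instance, a database named mars_app, and a collection named mars; all of these names are placeholders, not requirements.

from flask import Flask, render_template, redirect
from flask_pymongo import PyMongo
import scrape_mars

app = Flask(__name__)
app.config['MONGO_URI'] = 'mongodb://localhost:27017/mars_app'
mongo = PyMongo(app)

@app.route('/')
def index():
    # Query the single stored Mars document and hand it to the template.
    mars = mongo.db.mars.find_one()
    return render_template('index.html', mars=mars)

@app.route('/scrape')
def scrape():
    # Run all of the scraping code and overwrite the stored document (see Hints).
    mars_data = scrape_mars.scrape()
    mongo.db.mars.update_one({}, {'$set': mars_data}, upsert=True)
    return redirect('/', code=302)

if __name__ == '__main__':
    app.run(debug=True)

Inside index.html, the scraped values are then available to the template as mars.news_title, mars.featured_image_url, and so on.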

Step 3 – Submission

To submit your work to BootCampSpot, create a new GitHub repository and upload the following:

  1. The Jupyter Notebook containing the scraping code used.
  2. Screenshots of your final application.
  3. Then submit the link to your new repository to BootCampSpot.

Hints

  • Use Splinter to navigate the sites when needed and BeautifulSoup to help find and parse out the necessary data.
  • Use PyMongo for CRUD operations on your database. For this homework, you can simply overwrite the existing document each time the /scrape URL is visited and new data is obtained.
  • Use Bootstrap to structure your HTML template.

ITGE -1

Do you feel that countries and companies need explicit strategies for technology development, given the tremendous amount of largely spontaneous creativity that occurs today, often in areas where new technologies are not expected to exert a great influence? Why or why not?

Need one page of content 

Write an essay of at least 500 words discussing the reasons for the two new auditing roles in Oracle 12c. Why did Oracle consider them necessary? What problems do they solve? How do they benefit companies?

Use the five paragraph format. Each paragraph must have at least five sentences. Include 3 quotes with quotation marks and cited in-line and in a list of references. Include an interesting meaningful title. Cite your sources in a clickable reference list at the end. Do not copy without providing proper attribution (quotation marks and in-line citations).

Need Assignment Help

Go online and search for information that relates to ethical hacking (white hat or gray hat hacking). Choose one of these areas and explain why a company might benefit from hiring someone to hack into their systems.

Your assignment should be 3-4 paragraphs (300-400 words) in length.

I will expect APA formatting, citations, and references.

Python coding help

  

Task 2

Given two lists, write Python code to print “True” if the two lists have at least one common element. For example, if x = [1,2,3] and y = [3,4,5], the program should print “True” since there is a common element, 3.

Hint: 

We can first calculate set(x) - set(y), which removes from set(x) the elements that also exist in set(y) (3 in this case). So set(x) - set(y) will be equal to {1, 2}, which is a set, and its length is 2 if you calculate len(set(x) - set(y)).

If set(x) - set(y) has a smaller length than set(x), i.e. len(set(x) - set(y)) < len(set(x)), then at least one element was removed, so the two lists share a common element and the program should print “True”.

This will get easier after we learn the iteration structure and the conditional statement in the next few weeks. 
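
Putting the hint together (a minimal sketch using the example lists above):

x = [1, 2, 3]
y = [3, 4, 5]

# If removing y's elements shrinks set(x), the two lists share at least one element.
if len(set(x) - set(y)) < len(set(x)):
    print("True")
else:
    print("False")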


Discussion

After reading chapter 13, analyze the advantages and disadvantages of digital signatures. You are also required to post a response to a minimum of two other students in the class by the end of the week. You must use at least one scholarly resource. Every discussion posting must be properly APA formatted.

Assignment

Analyze asymmetric and symmetric encryption. Evaluate the differences between the two and determine which one you consider the most secure.

The writing assignment requires a minimum of two written pages to evaluate the history. You must use a minimum of three scholarly articles to complete the assignment. The assignment must be properly APA formatted with a separate title and reference page.

TFTP and FTP

In this graded practice you will save your router configuration to permanent storage on your router, as well as make backup copies on a TFTP and an FTP server.