Research document

Please go through the attached document, including all instructions and requirements, completely before providing the answer.

Module 7 – Course Insights & Reflections

Each of the weekly module assignments presented a picture of the network security controls required to maintain a secure network; it's a "lot" of work! In an essay, provide your thoughts and comments on the issues and decisions you had to make in each of the following:

Module 1:  Network security design and tools

Module 2:  Security policies and programs to support the C.I.A. Triad (Confidentiality, Integrity and Availability)

Module 3:  Security procedures for each of the security tools in your network design

Module 4:  Creating a Risk Assessment and Business Impact Analysis

Module 5:  Creating an Incident Response Plan (IRP)

Module 6:  Creating a Disaster Recovery Plan (DRP)

Operations Security

 

A tenet of telecommunications says the more people who access a network, the more valuable the network becomes. This is called Metcalfe's Law. When organizations implement security policies, there are pressures and trade-offs. Chapter nine examines different types of users on networks as it reviews an individual's need for access and how those needs can lead to risks.

  • How can the use of security policies reduce risk? Explain
  • How can a SAP reduce risk?  Explain
  • Why are end-users considered the "weakest" link with regard to implementing security policies and controls? Explain

Data Science Case Analysis

Final Case Analysis:

There are several CSV files attached; start with the Word document to understand the nature of the data and the broad expectations for the final case analysis. You are expected to perform exploratory data analysis and then the final analysis.

Data Details: You are given six years of lending data (2012–2017) in CSV format. The data files are larger than those you have used during this course so far. The size of each file differs and depends on the number of loans the company issued in a year. Note that the files are relatively larger from 2015 onward, which is when the company went public and started issuing more loans. Each file has 31 columns (variables), and a description of each column is provided in the DataDictionary.xls file.

In addition, you are given state characteristics in a file called states.csv. This file contains demographic information such as population size, median income, and unemployment rate.

Lastly, you are given a regions file called states_regions.csv that contains the larger region and division that each state falls in. For example, New Hampshire is in the Northeast region and the New England division.

There are three sections to this case: Merging and cleaning (15 points), Data Analysis (60 points), Visualization (25 points) totaling 100 points. 

Merging and Cleaning 

Stack all six Lending Club files on top of each other. Next, join the states.csv file with the stacked file using the state name as the primary key. Finally, merge the states_regions.csv file with the combined file so that you have one large file containing the Lending Club data along with state geographic and demographic information.
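The three steps above can be sketched with pandas. Toy frames stand in for the real files here, and the column names (`state`, `loan_amnt`, etc.) are assumptions; check DataDictionary.xls for the actual names.

```python
import pandas as pd

# Toy stand-ins for two of the six yearly Lending Club files
loans_2012 = pd.DataFrame({"state": ["NH", "CA"], "loan_amnt": [5000, 12000]})
loans_2013 = pd.DataFrame({"state": ["NH", "TX"], "loan_amnt": [7000, 9000]})

# 1) Stack the yearly files on top of each other
loans = pd.concat([loans_2012, loans_2013], ignore_index=True)

# 2) Join state demographics (states.csv) using the state name as the key
states = pd.DataFrame({"state": ["NH", "CA", "TX"],
                       "population": [1_400_000, 39_000_000, 29_000_000]})
combined = loans.merge(states, on="state", how="left")

# 3) Merge the regions/divisions lookup (states_regions.csv)
regions = pd.DataFrame({"state": ["NH", "CA", "TX"],
                        "region": ["Northeast", "West", "South"],
                        "division": ["New England", "Pacific", "West South Central"]})
combined = combined.merge(regions, on="state", how="left")
```

With the real files you would read each CSV with `pd.read_csv` and concatenate all six before merging; `how="left"` keeps every loan row even if a state is missing from the lookup files.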

Analysis 

Use the above file to analyze and answer the following questions:

1) Find the distribution of the number of loans by state, region, and division. Describe in your own words the geographic differences in the number of loans. Also, analyze your results by comparing the number of loans per capita. Did you notice any missing states in the Lending Club data? If yes, find out why.

2) Compare the average loan amount granted across all states and divisions. Which states and divisions have the highest and lowest average loan amounts?

3) Compare the average interest rate charged and average loan amount by the loan Grade. Do you notice any patterns? 

4) Run a frequency distribution of number of loans, average loan amount and average interest rate for each state by year (2012 through 2017). Describe the changing patterns in those numbers. 

5) Is there a relationship with the population size of a state and the average loan amount given? Is there a relationship between Grade of loans and median income level in a state?

6) This is an open-ended question where you are asked to share an interesting fact that you found through data analysis.
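Questions of this kind mostly reduce to grouped aggregations. A minimal sketch, again with toy rows standing in for the merged file and assumed column names:

```python
import pandas as pd

# Toy stand-in for the merged file built in the cleaning step
combined = pd.DataFrame({
    "state":     ["NH", "CA", "CA", "TX"],
    "region":    ["Northeast", "West", "West", "South"],
    "grade":     ["A", "B", "B", "A"],
    "loan_amnt": [5000, 12000, 8000, 9000],
    "int_rate":  [6.0, 11.5, 12.5, 6.5],
})

# Question 1: number of loans by state and by region
loans_by_state = combined.groupby("state").size()
loans_by_region = combined.groupby("region").size()

# Question 3: average interest rate and loan amount by grade
by_grade = combined.groupby("grade")[["int_rate", "loan_amnt"]].mean()
```

The per-capita comparison in question 1 follows the same pattern: divide `loans_by_state` by the state population column after setting the state name as the index.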

Visualization

1) Create a plot of interest rate versus the Grade of a loan and describe the pattern.

2) Create a map of US states and color code the map with the average amount of loans given. 

3) Show visually the relationship between the annual income of the recipient and the loan amount obtained from Lending Club

4) Create a plot that shows the relationship between the length of employment and amount of loan obtained. 

5) Create a “regional” map and show an interesting relationship of your liking. 
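As one illustration, question 3 above could be sketched with matplotlib as follows; the values are toy stand-ins for the annual-income and loan-amount columns, whose real names come from the data dictionary:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so the script runs headless
import matplotlib.pyplot as plt

# Toy values standing in for the annual income and loan amount columns
annual_inc = [40000, 55000, 72000, 90000, 120000]
loan_amnt = [5000, 8000, 12000, 15000, 24000]

fig, ax = plt.subplots()
ax.scatter(annual_inc, loan_amnt)
ax.set_xlabel("Annual income ($)")
ax.set_ylabel("Loan amount ($)")
ax.set_title("Loan amount vs. annual income")
fig.savefig("income_vs_loan.png")
```

For the state and regional maps, a choropleth library (for example, plotly's `choropleth` with `locationmode="USA-states"`) is one common choice.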

Python API- Vacationpy

 

Now let’s use your skills in working with weather data to plan future vacations. Use jupyter-gmaps and the Google Places API for this part of the assignment.

* **Note:** if you are having trouble displaying the maps, try running `jupyter nbextension enable --py gmaps` in your environment and retry.

* Create a heat map that displays the humidity for every city from Part I of the homework.

![heatmap](Images/heatmap.png)

* Narrow down the DataFrame to find your ideal weather condition. For example:

* A max temperature lower than 80 degrees but higher than 70.

* Wind speed less than 10 mph.

* Zero cloudiness.

* Drop any rows that don't satisfy all three conditions. You want to be sure the weather is ideal.

* **Note:** Feel free to adjust to your specifications but be sure to limit the number of rows returned by your API requests to a reasonable number.

* Use the Google Places API to find the first hotel for each city located within 5000 meters of your coordinates.

* Plot the hotels on top of the humidity heatmap with each pin containing the **Hotel Name**, **City**, and **Country**.

![hotel map](Images/hotel_map.png)
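The weather-filtering step above can be sketched with pandas; toy rows stand in for the Part I output, and the column names are assumptions:

```python
import pandas as pd

# Toy city-weather frame standing in for the Part I output
cities = pd.DataFrame({
    "City":       ["Avarua", "Bilma", "Chuy"],
    "Max Temp":   [75, 85, 72],
    "Wind Speed": [5, 3, 12],
    "Cloudiness": [0, 0, 0],
})

# Keep only rows matching the example ideal-weather criteria
ideal = cities[(cities["Max Temp"] > 70) & (cities["Max Temp"] < 80)
               & (cities["Wind Speed"] < 10)
               & (cities["Cloudiness"] == 0)].dropna()
```

Keeping this frame small before the hotel lookup matters because each remaining row triggers a separate Google Places request.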

As final considerations:

* Create a new GitHub repository for this project called `API-Challenge` (note the kebab-case). **Do not add to an existing repo**

* You must complete your analysis using a Jupyter notebook.

* You must use the Matplotlib or Pandas plotting libraries.

* For Part I, you must include a written description of three observable trends based on the data.

* You must use proper labeling of your plots, including aspects like: Plot Titles (with date of analysis) and Axes Labels.

* For max intensity in the heat map, try setting it to the highest humidity found in the data set.

## Hints and Considerations

* The city data you generate is based on random coordinates as well as different query times; as such, your outputs will not be an exact match to the provided starter notebook.

* You may want to start this assignment by refreshing yourself on the [geographic coordinate system](http://desktop.arcgis.com/en/arcmap/10.3/guide-books/map-projections/about-geographic-coordinate-systems.htm).

* Next, spend the requisite time necessary to study the OpenWeatherMap API. Based on your initial study, you should be able to answer basic questions about the API: Where do you request the API key? Which Weather API in particular will you need? What URL endpoints does it expect? What JSON structure does it respond with? Before you write a line of code, you should be aiming to have a crystal clear understanding of your intended outcome.

* A starter code for Citipy has been provided. However, if you’re craving an extra challenge, push yourself to learn how it works: [citipy Python library](https://pypi.python.org/pypi/citipy). Before you try to incorporate the library into your analysis, start by creating simple test cases outside your main script to confirm that you are using it correctly. Too often, when introduced to a new library, students get bogged down by the most minor of errors — spending hours investigating their entire code — when, in fact, a simple and focused test would have shown their basic utilization of the library was wrong from the start. Don’t let this be you!

* Part of our expectation in this challenge is that you will use critical thinking skills to understand how and why we’re recommending the tools we are. What is Citipy for? Why would you use it in conjunction with the OpenWeatherMap API? How would you do so?

* In building your script, pay attention to the cities you are using in your query pool. Are you getting coverage of the full gamut of latitudes and longitudes? Or are you simply choosing 500 cities concentrated in one region of the world? Even if you were a geographic genius, simply rattling off 500 cities based on your own selection would create a biased dataset. Be thinking of how you should counter this. (Hint: consider the full range of latitudes.)

* Once you have computed the linear regression for one chart, the process will be similar for all others. As a bonus, try to create a function that will create these charts based on different parameters.

* Remember that each coordinate will trigger a separate call to the Google API. If you’re creating your own criteria to plan your vacation, try to reduce the results in your DataFrame to 10 or fewer cities.

* Lastly, remember — this is a challenging activity. Push yourself! If you complete this task, then you can safely say that you've gained a strong mastery of the core foundations of data analytics, and it will only get better from here. Good luck!

Strategic Plans

Describe how a strategic plan for information technology can differ across organizations. 

Provide a real-world example. 

  1. In your first paragraph describe how a strategic plan for information technology can differ across organizations. Use examples in your description.
  2. In your second paragraph, use the example in the posting and discuss how an information technology strategic plan for a global company may differ from that for a national (US) company.
  3. In the third paragraph, provide a real-world example, based upon the examples used in the first post.

python programming

Write a program that encrypts and decrypts user input. Note: the input should be lowercase characters only, with no spaces. Your program should take a secret distance value from the user that is used for both encryption and decryption; each character of the user's input is offset by that distance value. For encryption: reverse the string, then shift each character of the reversed string forward by the distance value (x). For decryption: reverse the string, then shift each character of the reversed string backward by the distance value (x).

Encryption process: "cdu" reversed is "udc", which encrypts to "xgf". The program should ask the user for input to encrypt, and then display the resulting encrypted output. Next, your program should ask the user for input to decrypt, and then display the resulting decrypted output.

Enter phrase to Encrypt (lowercase, no spaces): cdu
Enter distance value: 3
Result: xgf
Enter phrase to Decrypt (lowercase, no spaces): xgf
Enter distance value: 3
Result: cdu
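A minimal sketch of the cipher itself, assuming the shift wraps around within a–z (the wrap-around behavior is an assumption; the prompt only guarantees lowercase input with no spaces):

```python
def shift_char(ch, distance):
    # Shift a lowercase letter by `distance`, wrapping within a-z.
    return chr((ord(ch) - ord("a") + distance) % 26 + ord("a"))

def encrypt(text, distance):
    # Reverse the string, then shift each character forward.
    return "".join(shift_char(ch, distance) for ch in reversed(text))

def decrypt(text, distance):
    # Reverse the string, then shift each character backward.
    return "".join(shift_char(ch, -distance) for ch in reversed(text))

print(encrypt("cdu", 3))  # xgf
print(decrypt("xgf", 3))  # cdu
```

The full program would wrap these functions in `input()` prompts matching the sample session above.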

DM Dis-6

Include an Abstract and introduction on the topic and a minimum of 3-5 references in proper APA format.

No plagiarism.

On time delivery.

 

The topic I would like to choose is “Using data mining techniques to improve the financial/stock information systems… “

Refer to the attached dataset and address each of the following three questions. Each response should be 2-3 paragraphs with an explanation of all terms and reasons for your decisions.

 

  1. Identify a chart type that could be used to display different editorial perspectives of your dataset and explain why you felt it to be appropriate.
  2. Identify two other chart types that could show something about your subject matter, though maybe not confined to the data you are looking at.  In other words, chart types that could incorporate data not already included in your selected dataset.
  3. Review the classifying chart families in Chapter 6 of your textbook.  Select at least one chart type from each of the classifying chart families (CHRTS) that could portray different editorial perspectives about your subject.  This may include additional data, not already included in your selected dataset.