Cybersecurity

(a) How many policy documents does the ISO 27000 standard provide? Briefly describe the content areas covered by each of them.

(b) Compare the ISO 27000 series of documents with the NIST documents discussed in Chapter 8. Which areas, if any, are missing from the NIST documents? Identify the strengths and weaknesses of the NIST program compared to the ISO standard.

What is SANS SCORE and why is it useful? Review the security policy documents provided by SANS SCORE and discuss the contents of the relevant documents available under each of the following categories: (a) Server Security (b) Application Security (c) Network Security (d) Incident Handling.

 What is the fundamental difference between a security management model and a security architecture model? Explain with an illustrative example.

(a) What are the key principles on which access control is founded?

(b) Which two access control methods use a state machine model to enforce security? Compare and contrast the two methods by explaining their similarities and differences.

What is separation of duties? Discuss the various ways through which this method can be used to improve an organization’s InfoSec practices.

Discussion and research reports for Cyber security

Need to present a discussion with a word count of 150+ words, and each discussion needs a separate reference link.

1) Metaverse cyber concerns (150 + 150 + 150 + 150 = around 600 words). This same topic is needed in 4 different formats, with 4 different URL links as well.

2) National Cyber Incident Response Plan (NCIRP) = 150 words

3) Costs of a data breach (150 + 150 + 150 = 450 words). This same topic is needed in three different formats, with 3 different URL links as well.

Need to present a research report with a word count of 70-110 words (do not exceed this range), and each report should also have a separate URL reference link.

  

1) Metaverse cyber concerns (70 + 70 + 70 + 70 = around 280 words). This same topic is needed in 4 different formats, with 4 different URL links as well.

2) National Cyber Incident Response Plan (NCIRP) = 70 words

3) Costs of a data breach (70 + 70 + 70 = 210 words). This same topic is needed in three different formats, with 3 different URL links as well.

 

It is suggested you use a Research Theme to help you stay focused, and to provide continuity throughout your research.  Here is a list of ideas, but this list is not all-inclusive: 

Current technologies available to support management functions,

Best Practices,

Future improvements/technologies, or

Other standards related to your specific field.

Note: The content should be in general terms with no technical jargon.

These questions are from a cybersecurity subject, so the material must relate to cybersecurity and should connect with readers.

STRICTLY NO PLAGIARISM, and do not use AI tools like ChatGPT to copy and paste information.

Each one should be different; strictly, no topic's information should be similar to another topic's.

Content should be unique and written in a simple, easy-to-understand way.

Deadline: 02/16/2023 11:59AM CST

Also provide separate files for the discussions and the research reports instead of submitting everything in a single file.

H6

Download and read the document and answer all the questions in it. Please see the attached documents: H6 & APA Criteria doc.

db

Use a customer order form and follow the bottom-up database design approach. Please include the website URL and an image of this form.

  1. Find all the attributes on the form.
  2. Establish the dependencies (determinants).
  3. Group attributes that have a common determinant into an entity type; name it.
  4. Find directly-related entity type pairs.
  5. Determine the connectivity for each pair.
  6. Draw the ERD.
  7. Review the ERD and update to be in 3NF if ERD from step 6 is not in 3NF.
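The grouping in steps 2-3 can be sketched in code. This is a minimal illustration using hypothetical field names from an imagined order form (the real form's attributes will differ): each determinant maps to the attributes it functionally determines, and each group becomes a candidate entity type.

```python
# Hypothetical attributes from a sample customer order form (assumed names,
# not from any specific website's form).
# Steps 2-3: map each determinant to the attributes it determines; each
# group then becomes a candidate entity type with the determinant as key.
dependencies = {
    "customer_id": ["customer_name", "customer_address"],   # CUSTOMER
    "order_id": ["order_date", "customer_id"],              # ORDER
    "product_id": ["product_name", "unit_price"],           # PRODUCT
    ("order_id", "product_id"): ["quantity"],               # ORDER_LINE
}

# Step 7 intuition: the design is in 3NF when every non-key attribute
# depends on the key, the whole key, and nothing but the key of its entity.
for determinant, attrs in dependencies.items():
    print(determinant, "->", attrs)
```

Keeping `quantity` under the composite determinant (`order_id`, `product_id`) rather than under `order_id` alone is exactly the kind of decision that keeps the final ERD in 3NF.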

No plagiarism

computer science final

Submit all your answers in one notebook file as (final_yourname.ipynb) 

Question 1 (80 pts) 

Sentiment analysis helps data scientists analyze many kinds of data, e.g., business, politics, social media, etc. For example, the IMDb dataset file "movie_data.csv" contains 25,000 highly polar IMDb movie reviews: 12,500 positive and 12,500 negative (negative reviews are labeled '0' and positive reviews '1').

Similarly, "amazon_data.txt" and "yelp_data.txt" each contain 1,000 labeled reviews (negative reviews labeled '0' and positive reviews labeled '1').

For further help, check the notebook sentiment_analysis.ipynb in Canvas and also explore the link: https://medium.com/@vasista/sentiment-analysis-using-svm338d418e3ff1 

Answer the following: 

a) Read all the above data files (.csv and .txt) into Python Pandas DataFrames. For each dataset, use 70% as the training set and 30% as the test set.

b) Using CountVectorizer and TfidfVectorizer separately from the sklearn library, perform Logistic Regression classification on the IMDb dataset and evaluate the accuracies on the test set.

c) Classify the Amazon dataset using Logistic Regression and a Neural Network (two hidden layers), compare the performances, and show the confusion matrices.

d) Generate a classification model for the Yelp dataset with the K-NN algorithm. Fit and test the model for different values of K (from 1 to 5) using a for loop, and record and plot the KNN testing accuracies in a variable (scores).

e) Generate predictions for the following reviews using the Logistic Regression classifier trained on the Amazon dataset: Review1 = "SUPERB, I AM IN LOVE IN THIS PHONE" and Review2 = "Do not purchase this product. My cell phone blast when I switched the charger"
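Parts (a) and (b) can be sketched as follows. This is a minimal, hedged example: it uses a tiny inline toy corpus standing in for "movie_data.csv" (whose real column names may differ), so in the actual assignment you would replace the DataFrame construction with `pd.read_csv("movie_data.csv")`.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy stand-in for movie_data.csv; column names "review"/"label" are assumed.
df = pd.DataFrame({
    "review": ["great movie loved it", "terrible film waste of time",
               "wonderful acting great plot", "awful boring terrible",
               "loved the wonderful plot", "boring waste awful film"] * 5,
    "label": [1, 0, 1, 0, 1, 0] * 5,
})

# Part (a): 70% training / 30% test split.
X_train, X_test, y_train, y_test = train_test_split(
    df["review"], df["label"], test_size=0.30, random_state=42,
    stratify=df["label"])

# Part (b): the same Logistic Regression classifier with each vectorizer.
accuracies = {}
for vec in (CountVectorizer(), TfidfVectorizer()):
    Xtr = vec.fit_transform(X_train)   # fit vocabulary on training data only
    Xte = vec.transform(X_test)        # reuse that vocabulary on the test set
    clf = LogisticRegression(max_iter=1000).fit(Xtr, y_train)
    accuracies[type(vec).__name__] = accuracy_score(y_test, clf.predict(Xte))
print(accuracies)
```

Note that each vectorizer is fit on the training split only; transforming the test set with a vectorizer fit on all data would leak information.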

Question 2 (60 pts)

The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups. This data set is built into scikit-learn, so you don't need to download it explicitly. You can check the code here:

https://towardsdatascience.com/machine-learning-nlp-text-classification-using-scikit-learn-python-and-nltk-c52b92a7c73a

to load the data set directly in the notebook (this might take a few minutes, so be patient). For example,

from sklearn.datasets import fetch_20newsgroups

twenty_train = fetch_20newsgroups(subset='train', shuffle=True)

a) Using CountVectorizer and TfidfVectorizer separately from the sklearn library, perform Logistic Regression classification on the training set, and show the confusion matrix and accuracy by predicting the class labels in the test set.

b) Perform a Logistic Regression classification and show the accuracy on the test set.

c) Perform K-means clustering on the training set with K = 20.

d) Plot the accuracy (elbow method) for different cluster sizes (5, 10, 15, 20, 25, 30) and determine the best cluster size.
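The elbow step in (c)-(d) can be sketched like this. This is a minimal illustration on a tiny toy corpus standing in for the newsgroups text (fetching the real data needs a network call); with the real data, replace `docs` with `fetch_20newsgroups(subset='train', shuffle=True).data` and use cluster sizes 5-30.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus standing in for the 20 Newsgroups documents (assumed text).
docs = ["space shuttle launch orbit", "goalie hockey puck score",
        "orbit satellite space station", "hockey score game puck",
        "graphics card pixel render", "render pixel image graphics"] * 4

X = TfidfVectorizer().fit_transform(docs)

# Elbow method: fit K-means for several cluster sizes and compare inertia
# (within-cluster sum of squares); the bend ("elbow") in the curve
# suggests the best K when the values are plotted.
inertias = {}
for k in (2, 3, 4, 5):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    inertias[k] = km.inertia_
print(inertias)
```

Inertia always decreases as K grows, so the point where the decrease levels off, rather than the minimum itself, indicates the best cluster size.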

Question 3 (60 pts)

The medical dataset "image_caption.txt" contains captions for 1,000 images (ImageID). Let's build a small search engine (you may explore these links for help: https://towardsdatascience.com/create-a-simple-search-engine-using-python-412587619ff5 and https://www.machinelearningplus.com/nlp/cosine-similarity/) by performing the following:

a) Read all the data files into a Python Pandas DataFrame.

b) Perform the necessary pre-processing tasks (e.g., punctuation, number, and stop-word removal).

c) Create a term-document matrix with TF-IDF weighting.

d) Calculate the similarity using cosine similarity and show the top-ranked ten (10) images based on the following query:

“CT images of chest showing ground glass opacity”
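Steps (b)-(d) can be sketched as below. This is a minimal example using four made-up captions standing in for "image_caption.txt" (the real file has 1,000 ImageID/caption pairs loaded into a DataFrame first), with sklearn's built-in English stop-word list as a stand-in for fuller pre-processing.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy captions standing in for image_caption.txt (hypothetical ImageIDs).
captions = {
    "img1": "CT scan of chest showing ground glass opacity",
    "img2": "MRI of the brain with contrast",
    "img3": "chest X-ray showing pleural effusion",
    "img4": "abdominal ultrasound of the liver",
}

# Step (b)/(c): lowercase, drop stop words, build TF-IDF term-document matrix.
vec = TfidfVectorizer(stop_words="english")
doc_matrix = vec.fit_transform(captions.values())

# Step (d): rank captions by cosine similarity to the query vector.
query = "CT images of chest showing ground glass opacity"
sims = cosine_similarity(vec.transform([query]), doc_matrix).ravel()
ranked = sorted(zip(captions, sims), key=lambda pair: -pair[1])
print(ranked)  # highest-similarity ImageID first
```

With the full dataset, taking the first ten entries of `ranked` gives the top ten images for the query.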