Which of the following best describes the heuristic or behavior-based detection method?

Online Digital Officer Safety

Todd G. Shipley, Art Bowker, in Investigating Internet Crimes, 2014

Malware protection

Protecting your investigative computer from malware is another basic step toward protecting your Internet investigation systems. A running antivirus application helps prevent viruses and other attacks from compromising the investigator’s equipment and the evidence collected. Antivirus vendors provide products that help the user prevent computer virus infections. Generally, these products rely on two techniques for detecting viruses. The first and most prevalent technique uses antivirus signatures, which are “a string of characters or numbers that makes up the signature that anti-virus programs are designed to detect. One signature may contain several virus signatures, which are algorithms or hashes that uniquely identify a specific virus” (Janssen, 2013). Antivirus software searches for these signatures on the hard drive and removable media (including the boot sectors of the disks) and in random access memory. If it finds a signature match, it quarantines the file with the aim of removing it from the system. The vendor maintains a virus signature database, which the software periodically checks for updates. The pitfall of this detection method is its vulnerability to a “zero-day threat”: a newly created virus’s signature takes time to be discovered and added to the database, and if the signature is not in the database, an application that relies on signatures alone will not identify the virus.
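
As a rough illustration of the signature approach (a minimal sketch, not any vendor’s actual engine; the signature set and scan root are placeholders), the following compares file hash digests against a small database of known-bad hashes, which is exactly why a zero-day sample whose hash is not yet in the database slips through:

    import hashlib
    import os

    # Placeholder signature database: hash digests of known-malicious files.
    # Real products use far richer signatures (byte patterns, wildcards, algorithms).
    KNOWN_BAD_MD5 = {
        "0123456789abcdef0123456789abcdef",  # placeholder digest of a known-bad sample
    }

    def md5_of(path):
        """Hash a file in chunks so large files do not exhaust memory."""
        digest = hashlib.md5()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def scan_tree(root):
        """Walk a directory tree and report files whose digest matches a signature."""
        hits = []
        for folder, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(folder, name)
                try:
                    if md5_of(path) in KNOWN_BAD_MD5:
                        hits.append(path)
                except OSError:
                    continue  # unreadable file; a real scanner would log this
        return hits

    if __name__ == "__main__":
        for match in scan_tree("."):
            print("Signature match (quarantine candidate):", match)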

Another method is heuristic analysis. In this approach, the antivirus software runs a suspected program in a controlled environment before allowing it to run on the user’s system (see Investigative Tips, Virtual Machines and Sandboxes in this chapter). If the suspected program performs any functions associated with malware, the antivirus application stops the program and notifies the user (Security News, 2013). The drawback of this technique is that it can produce false positives. Because of these issues, many vendors offer applications that blend the two approaches.
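
To contrast the two techniques, here is a minimal behavior-based sketch; the action names, weights and threshold are invented for illustration, and real heuristic engines observe the sample in an emulator or sandbox rather than receiving a ready-made action list:

    # Toy behavior-based detector: score a program by the suspicious actions it
    # performs in a controlled run, rather than by matching a known signature.
    SUSPICIOUS_ACTIONS = {
        "modifies_boot_sector": 5,
        "disables_antivirus": 5,
        "writes_to_system_dir": 3,
        "registers_autostart": 3,
        "opens_many_outbound_connections": 2,
    }
    ALERT_THRESHOLD = 6  # arbitrary cutoff; tuning it trades detection for false positives

    def heuristic_verdict(observed_actions):
        """Return (score, verdict) for a list of actions observed in the sandbox."""
        score = sum(SUSPICIOUS_ACTIONS.get(action, 0) for action in observed_actions)
        return score, ("block and notify user" if score >= ALERT_THRESHOLD else "allow")

    # Example: an unknown sample that tampers with startup entries and system files
    print(heuristic_verdict(["registers_autostart", "writes_to_system_dir",
                             "opens_many_outbound_connections"]))

Note that a legitimate installer may also write to system directories and register itself to start automatically, which is precisely how this kind of scoring produces false positives.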

Investigative Tip

Possible Conflicts of Antivirus Software

Be aware that a common problem with antivirus applications is their incompatibility with each other. Installing multiple antivirus applications on the same computer can cause unexpected problems. Before installing a new antivirus program, uninstall any existing one first so that the programs do not conflict.

Be sure to update the programs (and their virus definitions) periodically. Setting the program to check for updates that you then install manually is a good idea. Establish a policy within the investigative team about when to run full system scans; otherwise, the programs may not provide the complete protection they can offer. Some programs require a system reboot and can run for hours to verify that a hard drive is free of viruses, so scans are best done at night to avoid disrupting investigations. Automatic installation of updates is not recommended; disabling it prevents an update from forcing a reboot in the middle of an investigation. During setup of the software, be sure to deselect automatic updates.

Investigative Tips

Commonly Used Antivirus Software:

Avast (www.avast.com): This program updates its definitions frequently, sometimes two or three times a day when many new or modified viruses are circulating, and it installs those updates automatically.

AVG (www.grisoft.com): AVG updates more often than most commercial virus programs and is an effective antivirus program.

Bitdefender (www.bitdefender.com): One of their products not only has anti-malware features but also includes a firewall to monitor Internet and Wi-Fi connections.

Norton Antivirus (www.symantec.com): Symantec, maker of Norton Antivirus, is a major player in the antivirus community. Their product has been a standard for computer users for many years.

McAfee VirusScan (www.mcafee.com): McAfee is another mainstay in the antivirus community.

LavaSoft (www.lavasoftusa.com): Lavasoft made the original Ad-Aware adware-removal program and has branched out into general antivirus support.

Malwarebytes (www.malwarebytes.org): Another popular malware-protection application.

URL: https://www.sciencedirect.com/science/article/pii/B9780124078178000072

Botnet Detection: Tools and Techniques

Craig A. Schiller, ... Michael Cross, in Botnets, 2007

Intrusion Detection

Intrusion detection systems (IDSes) are either host or network based. A NIDS should focus on local and outgoing traffic flows as well as incoming Internet traffic, whereas a HIDS can pick up symptoms of bot activity at a local level that can't be seen over the network.

At either level, an IDS can focus on either anomaly detection or signature detection, though some are more or less hybrid.

An IDS is important, but it should be considered part of an intrusion prevention strategy, whether it's part of a full-blown commercial system or one element of a multilayered defense.

“Virus detection” is, or should be, something of an understatement: antivirus should sit at all levels of the network, from the perimeter to the desktop, and include preventive and recovery controls, not just detection.

Antivirus is capable of detecting a great deal more than simple viruses and is not reliant on simple detection of static strings. Scanners can detect known malware with a very high degree of accuracy and can cope with a surprisingly high percentage of unknown malware, using heuristic analysis.

However, bots are not only capable of sophisticated evasion techniques but also present dissemination-related difficulties that aren't susceptible to straightforward technical solutions at the code-analysis level.

There is a place for open-source antivirus as a supplement to commercial solutions, but it's not a direct replacement; it can't cover the same range of threats (especially older threats), even without considering support issues.

Snort is a signature-based NIDS with a sophisticated approach to rule sets, in addition to its capabilities as a packet sniffer and logger.

As well as writing your own Snort signatures, you can tap into a rich vein of signatures published by a huge group of Snort enthusiasts in the security community.

The flexibility of the signature facility is illustrated by four example signatures, one of which could almost be described as adding a degree of anomaly detection to the rule set.

Tripwire is an integrity management tool that uses a database of file signatures (message digests or checksums, not attack signatures) to detect suspicious changes to files.

The database can be kept more secure by keeping it on read-only media and using MD5 or snefru message digests.
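
As a rough sketch of this integrity-management idea (not Tripwire's actual implementation; the paths and baseline location shown are placeholders), a baseline of message digests is recorded once, and later checks flag files whose digest has changed, disappeared or newly appeared:

    import hashlib
    import json
    import os

    def digest(path):
        """MD5 message digest of a file's contents (other digests could be used)."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot(root):
        """Map every file under root to its digest."""
        table = {}
        for folder, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(folder, name)
                table[path] = digest(path)
        return table

    def create_baseline(root, baseline_path):
        # Ideally the baseline file is then copied to read-only media, as noted above.
        with open(baseline_path, "w") as f:
            json.dump(snapshot(root), f, indent=2)

    def check(root, baseline_path):
        with open(baseline_path) as f:
            baseline = json.load(f)
        current = snapshot(root)
        changed = [p for p in baseline if p in current and current[p] != baseline[p]]
        missing = [p for p in baseline if p not in current]
        added = [p for p in current if p not in baseline]
        return changed, missing, added

    # Usage (paths are illustrative):
    #   create_baseline("/etc", "/mnt/readonly/baseline.json")
    #   print(check("/etc", "/mnt/readonly/baseline.json"))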

The open-source version of Tripwire is limited in the platforms it covers. If the devices you want to protect are all POSIX compliant, you're not bothered about value-adds such as support and enterprise-level management, and you're happy to do some DIY, it might do very well.

Ken Thompson's “Reflections on Trusting Trust” makes the point that you can't have absolute trust in any code you didn't build from scratch yourself, including your compiler. This represents a weakness in an application that relies for its effectiveness on being installed to an absolutely clean environment.

URL: https://www.sciencedirect.com/science/article/pii/B978159749135850007X

Heuristics for Decision and Choice

P.M. Todd, in International Encyclopedia of the Social & Behavioral Sciences, 2001

1 Historical Overview

The term ‘heuristic’ (from a Greek root, ‘to discover’) was used for most of the twentieth century to refer to useful, even indispensable, strategies for finding solutions to problems that are difficult to approach by other means. Gestalt psychologists spoke of heuristic reasoning methods such as ‘looking around’ and ‘inspecting the problem’ to guide the search for useful information in the environment, while mathematicians employed heuristics including ‘examining special cases’, ‘exploiting related problems’, ‘decomposing and recombining’, and ‘working backwards’ (Groner 1983). These rather vague strategies were made more precise in computer-based models of human problem solving and reasoning largely based on the means-ends analysis heuristic, which sought some way to reduce the distance between the current partial-solution state and the goal state (e.g., in Newell and Simon's early General Problem Solver system; see Artificial Intelligence: Search). Such general purpose or ‘weak’ methods proved insufficient to tackle many problems, so research in AI in the 1970s turned to collecting domain-specific rules of thumb from specialists in a particular field and incorporating these into expert systems. Around the same time, mathematicians working in OR faced new results from computational complexity theory indicating that efficient algorithmic solutions to many classes of challenging combinatorial problems (such as the Traveling Salesman Problem) might not be found; as a consequence, they too turned to the search for problem-specific heuristics, though through invention rather than behavioral observation (Müller-Merbach 1981).

After 1970, though, heuristics gained a different connotation in psychology: fallible cognitive shortcuts that people often use in situations where logic or probability theory should be applied instead. The ‘heuristics-and-biases’ research program launched by Tversky and Kahneman (1974, Kahneman et al. 1982) emphasized how the use of heuristics can lead to systematic errors and lapses of reasoning (see Decision Biases, Cognitive Psychology of) indicating human irrationality. The heuristics studied (see Sect. 2) were often vaguely defined and broadly applicable to judgments made under uncertainty in any domain, akin to the weak methods explored earlier in AI. This negative view of heuristics and of the people who use them as ‘cognitive misers’ employing little information or cognition to reach biased conclusions has spread to many other social sciences, including economics (Rabin 1998) and law (Hanson and Kysar 1999). More recently, a new appreciation is emerging within psychology that heuristics may be the only available approach to decision making in the many problems where optimal logical solutions are computationally intractable or do not exist (as OR researchers realized), and that domain-specific decision heuristics may be more powerful than domain-general logical approaches in other problems (as AI found). This has led to the study of precisely-specified heuristics matched to particular decision tasks (see Sect. 3), and the ways that learning and evolution can achieve this match in human behavior (e.g., Payne et al. 1993, Gigerenzer et al. 1999). The existence of such evolved adaptive heuristics has already been widely accepted for other animals in research on rules of thumb in behavioral ecology (Gigerenzer et al. 1999).

URL: https://www.sciencedirect.com/science/article/pii/B008043076700629X

Operational Smart Grid Security

Robert W. Griffin, Silvio La Porta, in Smart Grid Security, 2015

9.4 Action: Mitigation, Remediation and Recovery

Visibility and analytics enable effective action for recovery from incidents, remediation of vulnerabilities and mitigation of risk. Consider, for example, a cyber attack launched through a compromised communication network against the circuit breakers in a substation, resulting in damage to the station's transformers. Because the disabled substation could leave millions of customers without power, the event is assessed as a high-priority issue. The response team confirms that the right people within the company are handling the incident properly: managers are conducting forensic analyses to determine what happened, crews have been deployed to fix the problem, and the response teams are coordinating with each other to restart systems and restore power.

Once the immediate crisis is over, remediation activity determines how the attack occurred and whether there is a vulnerability that can be addressed to reduce the risk that similar attacks succeed in the future. At the same time, the risk management team investigates the failure scenario to determine whether there are mitigation strategies that would reduce the likelihood and impact of transformer failure, whatever the cause.

9.4.1 Action: Recovering from and Managing Incidents

Even in the best instrumented and most secure operational model, incidents have to be expected. In fact, the number of incidents queued up for handling may be extremely large. It is essential for the incident response system not only to prioritize the incidents in the queue, but also to eliminate false positives and other clutter from the queue. To do this, incidents should be automatically checked against a global repository of items that analysts have previously investigated and resolved, before the new incidents are put on the queue. The incident response system should continually learn from these previous incidents, updated threat information, changes to operational configurations and other data sources in order to simplify the analyst’s job as much as possible.
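
A hedged sketch of that pre-queue check, with an invented incident format and a toy stand-in for the global repository, might look like this:

    # Before queueing, drop incidents whose fingerprint matches something analysts
    # have already investigated and resolved (a toy stand-in for a global repository).
    RESOLVED_FINGERPRINTS = {
        ("port_scan", "10.0.0.0/24"),           # previously triaged as a benign network sweep
        ("failed_login_burst", "vpn-gateway"),  # known noisy monitoring artifact
    }

    def should_queue(incident):
        """Queue only incidents that are not already-resolved clutter."""
        fingerprint = (incident["type"], incident["target"])
        return fingerprint not in RESOLVED_FINGERPRINTS

    incoming = [
        {"type": "port_scan", "target": "10.0.0.0/24"},
        {"type": "malware_alert", "target": "hmi-workstation-07"},
    ]
    queue = [i for i in incoming if should_queue(i)]
    print(queue)  # only the malware alert reaches the analysts' queue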

The incident response system should, as much as possible, prioritize the incidents in terms of known or potential impact on the business. The “Security Engineering Report on Smart Grids” by Hwang et al. (2012) explores this impact from two perspectives. The first is technical impact factors:

Loss of confidentiality: How much data may have been disclosed and how sensitive is it?

Loss of integrity: How much data could have been corrupted and how damaged is it?

Loss of availability: How much service may have been lost and how vital is it?

Loss of accountability: Is the incident traceable to one or more individuals?

The second is business impact factors:

Financial damage: How much financial damage may result from the incident?

Reputation damage: How much damage to the business's reputation may result from the incident?

Non-compliance: How much exposure to and risk of non-compliance does the incident introduce?

Privacy violation: How much personally identifiable information may have been disclosed?

A full understanding of the impact may not be possible until the investigation is complete, or even for some time after that. But the incident response system should provide what prioritization it can, while ensuring that the information behind the prioritization decision is available to the response team, so that the most important incidents are addressed as quickly as possible.
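
The prioritization itself can be sketched very roughly; the snippet below assumes each factor listed above is rated on a 0-9 scale and weights all factors equally, both of which are assumptions for illustration only, not a prescription from the cited report:

    # Toy prioritization: average the technical and business impact factors listed
    # above (each rated 0-9 here; the scale and equal weighting are assumptions).
    def impact_score(technical, business):
        tech = sum(technical.values()) / len(technical)
        biz = sum(business.values()) / len(business)
        return max(tech, biz)  # prioritize on the worse of the two perspectives

    incident = {
        "technical": {"confidentiality": 2, "integrity": 7, "availability": 9, "accountability": 4},
        "business": {"financial": 8, "reputation": 6, "non_compliance": 5, "privacy": 1},
    }
    print(round(impact_score(incident["technical"], incident["business"]), 1))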

The incident response system should provide a rich set of context about prospective problems. For instance, for an incident involving a suspicious file that may be malware, the system should correlate suspicious behaviors of the file (e.g., a driver, a process, a DLL), capture what is known about the file (e.g., file size, file attributes, MD5 file hash) through static and heuristic analysis, provide context on the file's owner or user, and so on. Security analysts can then use this information to investigate whether the file is malicious, and should be blacklisted, or benign, and should be whitelisted. If an item is deemed malicious, all occurrences of the problem across the entire IT environment can be identified immediately. Then, once a remedy is determined, the security operations team can perform any necessary forensic investigations and/or clean all the affected endpoints.
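
For instance, a fragment along these lines (purely illustrative; the field names and the example path are invented) could gather that kind of static context about a suspicious file for an analyst to review:

    import hashlib
    import os
    import pwd  # POSIX-only; used here to resolve the file owner

    def file_context(path):
        """Collect basic static context about a suspicious file for analyst triage."""
        info = os.stat(path)
        with open(path, "rb") as f:
            md5 = hashlib.md5(f.read()).hexdigest()  # fine for a sketch; hash in chunks for large files
        return {
            "path": path,
            "size_bytes": info.st_size,
            "owner": pwd.getpwuid(info.st_uid).pw_name,
            "last_modified": info.st_mtime,
            "md5": md5,
        }

    # print(file_context("/tmp/suspicious.dll"))  # illustrative path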

Incident response systems should also ingest information from external sources to enrich the organization’s internal data sources for purposes of incident investigation and response. For example, the security analytics platform and management dashboard should aggregate and operationalize the best and most relevant intelligence and context from inside and outside the organization to accelerate the analysts’ decision making and workflows.

Remediation after a malware infection is a complicated task. If the compromised machine is vital to system availability, it is not always possible to restore a previously saved safe machine image. The incident response team should check all the machine's environments to remove every potential access point that attackers could use. Cyber criminals tend to use various entrenchment techniques in a victim's network; the term entrenchment describes techniques that allow attackers to maintain unauthorized access to an enterprise network despite the victim's attempted remediation efforts. A victim's machine can be compromised in a variety of ways: attackers may install web shells, add malicious or modified DLLs to running web servers, set up RDP backdoors, hide malware that begins its activity only after a fixed period of time, and so on.

Before starting to clean infected machines, it is good practice to monitor network traffic for similar traffic patterns or similar IP connections, and to check for possible lateral movement by forensically analyzing the victim's machines. The aim is to do this stealthily, so that the attacker is unaware of being under surveillance and does not launch countermeasures to destroy evidence or deploy additional entrenchment techniques to remain in the network. Forensic analysts should search not only for malicious software but also for legitimate software that could have been installed for malicious purposes, as well as for misconfigurations created by the attackers.

An effective incident response team should be composed of malware experts, IT forensics analysts and network experts, giving the organization the breadth of competency and skills needed for successful detection, protection and investigation. Operations teams may be confronted with potential evidence of an infiltration or breach, yet find themselves exploring potential causes without success for weeks or even months. When that happens, it helps to bring in people with specialized expertise and tools in incident response (IR). IR specialists can deploy technologies that capture activity on networks and endpoints in key segments of the IT environment. Based on the scans, analyses and supplemental information these technologies generate, experienced IR professionals can usually pinpoint where and how security breaches are occurring and shut down ongoing cyber attacks much faster than organizations can do on their own.

9.4.2 Action: Remediating Vulnerabilities and Anomalies

Responding to the incident itself is essential. But equally important is determining whether there were vulnerabilities that contributed to the incident's occurrence or impact. These vulnerabilities may have been technological, such as software vulnerabilities that provided access for an attacker or that caused unexpected behavior in the operation of a component. They may have been process issues that prevented a problem from being recognized until it had reached a critical level or that resulted in the initiation of a failure condition. Or they may be organizational, educational or other issues related to the structure and people of the organization, such as an individual's vulnerability to the social engineering attacks that result in malware infections.

The incident management system should support the identification of such vulnerabilities and assist in their remediation, for example through the remediation planning shown in Figure 9.6.

Figure 9.6. Remediation planning.

On a more comprehensive level, an incident may indicate a more fundamental issue in the operational model, such as missing or improperly instrumented controls. For example, the 2009 paper by the US Department of Energy calls out the vulnerabilities inherent in the older control systems that are currently deployed throughout the United States and that may have to be replaced: “The electric power industry relies heavily on control systems to manage and control the generation, transmission, and distribution of electric power. Many of the control systems in service today were designed for operability and reliability during a time when security was a low priority. Smart Grid implementation is going to require the installation of numerous advanced control system technologies along with greatly enhanced communication networks.” (p. 4) This book has discussed many of these advanced technologies, which may need to be considered not only in the initial development of the operational model but also in addressing incidents that occur.

9.4.3 Action: Mitigating Risk

The Smart Grid operational model includes the effective risk management discipline discussed earlier, employing a broad range of factors to make probabilistic decisions about risk and take prioritized actions, including alerts to response teams, to recover from incidents and remediate vulnerabilities. But an incident may also indicate the opportunity to take actions to mitigate the risk associated with that incident.

For example, a well-prepared security team will know what the organization's valuable information assets are and which systems, applications and users have access to them. Awareness of these parameters helps security analysts narrow their field of investigation during a breach so they can address problems faster and with greater confidence. But a given incident may indicate that the security operations team should conduct a breach readiness assessment or institute practice drills to improve the speed and efficacy of their reactions to cyber attacks. They may need to revise their inventory of high-value assets that must be protected, based on new knowledge of what is attractive to an attacker. They may need to review their security policies against business priorities and regulatory requirements.

Figure 9.7, which expands on a similar diagram in the Popovik paper (2013), provides an example of a process for mitigating risk in response to incidents such as detected intrusions.

Figure 9.7. Risk mitigation.

Such a process can take advantage of operational incidents, whether they result from a security incident, equipment failure, natural disaster or any other cause, to enable organizations to take new actions: progressively improving their processes, optimizing staffing and skills, modifying their technology platforms, changing their supplier relationships or taking any of the many other actions that could help them better address the risk of such an incident.

These improvements can be assisted by the technology advancements in big data and security analytics systems that deliver “imagine if” capabilities. The bounds of what’s imaginable are now being explored by operations professionals and business leaders together. For organizations concerned about an effective operational model, these “imagine if” scenarios often focus on injecting better intelligence and context into both operational and security practices. For example, if we apply new analytic approaches to historical data, what could we learn? What do the cyber attacks we’ve encountered tell us about our business and operational risks? If we add new log sources or external intelligence feeds to our data warehouse, what patterns could we look for that we couldn’t even imagine seeing before? What types of intelligence might help us hunt down threats or respond to operational incidents more quickly, including through automated capabilities that do not require human intervention?

An effective operational model for Smart Grid should ensure that this connection between incident response and the risk management process is established and effective.

URL: https://www.sciencedirect.com/science/article/pii/B9780128021224000092

Evaluation Methods

Kathy Baxter, ... Kelly Caine, in Understanding your Users (Second Edition), 2015

Pulling It All Together

In this chapter, we have discussed several methods for evaluating your product or service. There is a method available for every stage in your product life cycle and for every schedule or budget. Evaluating your product is not the end of the life cycle. You will want (and need) to continue other forms of user research so you continually understand the needs of your users and how to best meet them.

Case Study: Applying Cognitive Walkthroughs in Medical Device User Interface Design

Medtronic, Inc., is the world’s largest medical technology company. As a human factors scientist at Medtronic, my goal is to proactively understand the role of the user and the use environment, design products that minimize use error that could lead to user or patient harm, and maximize clinical efficiency and product competitiveness by promoting ease of learning and ease of use. I have been conducting human factors research in the Cardiac Rhythm and Disease Management (CRDM) division of Medtronic, which is the largest and oldest business unit of Medtronic. In this case study, I describe how I used one human factors user research technique, a lightweight cognitive walkthrough, on a heart failure project at CRDM.

Heart failure is a condition in which the heart does not pump enough blood to meet the body’s needs. According to the Heart Failure Society of America, this condition affects five million Americans with 400,000-700,000 new cases of heart failure diagnosed each year. Cardiac resynchronization therapy (CRT) is a treatment for symptoms associated with heart failure. CRT restores the coordinated pumping of the heart chambers by overcoming the delay in electrical conduction. This is accomplished by a CRT pacemaker, which includes a lead in the right atrium, a lead in the right ventricle, and a lead in the left ventricle. These leads are connected to a pulse generator that is placed in the patient’s upper chest. The pacemaker and the leads maintain coordinated pumping between the upper and the lower chambers of the heart, as well as the left and right chambers of the heart. The location of the leads and the timing of pacing are important factors for successful resynchronization. For patients with congestive heart failure who are at high risk of death due to their ventricles beating fast, a CRT pacemaker that includes a defibrillator is used for treatment.

The Attain Performa® quadripolar lead is Medtronic’s new left ventricle (LV) lead offering, which provides physicians more options to optimize CRT delivery. This lead provides 16 left pacing configurations that allow for electronic repositioning of the lead without surgery if a problem (e.g., phrenic nerve stimulation, high threshold) arises during implant or follow-up. Though the lead offers several programming options during implant and over the course of therapy long-term, the addition of 16 pacing configurations to programming has the potential to increase clinician workload. To reduce clinician workload and expedite clinical efficiency, Medtronic created VectorExpressTM, a smart solution that replaces the 15-30-minute effort involved in manually testing all the 16 pacing configurations through a one-button click. VectorExpressTM completes the testing in two to three minutes and provides electrical data that clinicians can use to determine the optimal pacing configuration. This feature is a big differentiator from the competitive offering.

Uniqueness of the Medical Domain

An important aspect that makes conducting human factors work in the medical device industry different from non-healthcare industries is the huge emphasis regulatory bodies place on minimizing user errors and use-related hazards caused by inadequate medical device usability. International standards on human factors engineering specify processes that medical device manufacturers should follow to demonstrate that a rigorous usability engineering process has been adopted and that risks to user or patient safety have been mitigated. This means analytic techniques (e.g., task analysis, interviews, focus groups, heuristic analysis), formative evaluations (e.g., cognitive walkthrough, usability testing) and validation testing with a production-equivalent system and at least 15 participants from each representative user group are required to optimize medical device design. Compliance with standards also requires maintenance of records showing that the usability engineering work has been conducted. Though a variety of user feedback techniques were employed in this project as well, this case study focuses on the use of a lightweight cognitive walkthrough with subject matter experts, which was employed to gather early feedback from users on design ideas before creating fully functional prototypes for rigorous usability testing. Cognitive walkthroughs are a great technique for discovering users’ reactions to concepts being proposed early in the product development life cycle, to determine whether we are going in the right direction.

Preparing for the Cognitive Walkthroughs

The cognitive walkthrough materials included the following:

An introduction of the Attain Performa quadripolar lead and the objective of the interview session.

Snapshots of user interface designs being considered, in a Microsoft PowerPoint format. Having a pictorial representation of the concepts makes it easier to communicate our thoughts with end users and, in turn, gauge users’ reactions.

Clinical scenarios that would help to evaluate the usefulness of the proposed feature. Specifically, participants were presented with two scenarios: implant, where a patient is being implanted with a CRT device, and follow-up, where a patient has come to the clinic complaining of phrenic nerve stimulation (i.e., hiccups).

A data collection form. For each scenario, a table was created with “questions to be asked during the scenario” (e.g., “When will you use the test?” “How long would you wait for the automated test to run during an implant?” “Under what circumstances would you want to specify a subset of vectors on which you want to run the test?” “How would you use the information in the table to program a vector?”) and “user comments” as headers. Each question had its own row in the table.

Conducting the Cognitive Walkthroughs

Cognitive walkthroughs were conducted at Medtronic’s Mounds View, Minnesota, campus with physicians. A total of three cognitive walkthroughs (CWs) were conducted. Unlike studies that are conducted in a clinic or hospital, where physicians take time out of their busy day to talk to us and where there is a higher potential for interruptions, studies conducted at Medtronic follow a schedule, with physicians dedicating their time to these sessions. Though three CWs may at first glance seem like a small sample size, it is important to point out that we followed up with multiple rounds of usability testing with high-fidelity, interactive prototypes later on.

Each CW session included a human factors scientist and a research scientist. The purpose of including the research scientist was to have a domain expert who was able to describe the specifics of the VectorExpressTM algorithm. It is also good practice to include your project team in the research because this helps them understand user needs and motivations firsthand. Both the interviewers had printed copies of the introductory materials and the data collection forms. The PowerPoint slides illustrating the user interface designs were projected onto a big screen in the conference room.

The session began with the human factors scientist giving physicians an overview of the feature that Medtronic was considering and also describing the objective of the session. This was followed by the research scientist giving an overview of how the VectorExpressTM algorithm works—in other words, a description of how the algorithm is able to take the electrical measurements of all the LV pacing configurations. Then, using the context of an implant and follow-up scenario, the human factors scientist presented the design concepts and asked participants questions about how they envisioned the feature to be used. This was a “lightweight” CW, meaning that we did not ask participants each of the four standard questions as recommended by Polson et al. (1992). Time with the participants was extremely limited, and therefore, in order to get as much feedback about the design concepts as possible, we focused on interviewing participants deeply about each screen they saw. Both the interviewers recorded notes in their data collection forms.

Analyzing Information from Cognitive Walkthroughs

The human factors scientist typed up the notes from the CWs, entering the responses to the questions in the data collection form. A “key takeaways” section was then generated for each CW session that was conducted. The document was then sent to the research scientist for review and edits. The report from each CW session was submitted to the cross-functional team (i.e., Systems Engineering, Software Engineering, and Marketing). Note that these one-on-one CW sessions were also preceded by a focus group with electrophysiologists to gather additional data from a larger group. After all of these sessions, we came together as a cross-functional team and identified key takeaways and implications for user interface design based on the learnings from the CWs and focus groups.

Next Steps

The feedback obtained from the CWs helped us to conclude that overall, we were going in the right direction, and we were able to learn how users would use the proposed feature during CRT device implants and follow-ups. The CWs also provided insights on the design of specific user interface elements.

In preparation for formative testing, we developed high-fidelity software prototypes and generated a test plan with a priori definition of usability goals and success criteria for each representative scenario. We worked with Medtronic field employees to recruit representative users for formative testing.

We also conducted a user error analysis on the proposed user interface, to evaluate potential user errors and any associated hazards.

Things to Remember

When conducting any research study, including CWs, flexibility is important. Research sessions with key opinion leaders rarely follow a set agenda. Sessions with highly skilled users, such as electrophysiologists, involve a lot of discussion, with the physicians asking many in-depth technical questions. Be prepared for that by anticipating the technical questions they might ask. I have in the past created a cheat sheet with a list of potential questions and answers to them. These cheat sheets should be developed with input from a technical expert.

Involve cross-functional partners such as the project systems engineer or the research scientist (who are domain experts) in the user research process. They have a much more in-depth understanding of the system that becomes complementary to the role of a human factors engineer.

Most research studies run into the issue of taking what users say at face value. It is important to question in depth the motivation behind a perceived user need before jumping to conclusions. In addition, it is important to triangulate the data with conclusions derived from other techniques, such as behavioral observations and formative testing.

URL: https://www.sciencedirect.com/science/article/pii/B9780128002322000146

Optimal placement of virtual network functions in software defined networks: A survey

Sedef Demirci, Seref Sagiroglu, in Journal of Network and Computer Applications, 2019

5.2 Interaction among VNFs

In the NFV architecture, there are many different types of VNFs serving different purposes. For example, a stand-alone firewall, which is the first level of access control, filters traffic by checking only packet header fields (source/destination IP address, port numbers, etc.), while a DPI engine inspects the payload of a packet using signature-based detection technologies along with heuristic and statistical analysis. So placing the firewall logically in front of the DPI, as exemplified in Fig. 8, would be more reasonable than the reverse. To this end, it is essential to analyze the interactions among different VNFs, in addition to considering the characteristics of each function (John et al., 2013; Doriguzzi-Corin et al., Salvadori).
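
As a toy illustration of encoding such ordering constraints (the VNF names and precedence rules here are hypothetical, not drawn from the cited works), a simple topological sort places the firewall ahead of the DPI engine in the service chain:

    from graphlib import TopologicalSorter  # Python 3.9+

    # "A must come before B" constraints between VNFs; the firewall filters on header
    # fields first so the costly payload-inspecting DPI only sees permitted traffic.
    must_precede = {
        "dpi": {"firewall"},       # firewall before DPI
        "load_balancer": {"dpi"},  # DPI before load balancer (illustrative)
    }

    chain = list(TopologicalSorter(must_precede).static_order())
    print(chain)  # ['firewall', 'dpi', 'load_balancer']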

Currently, some research efforts handle this problem through a “service function chaining” approach (Jarraya et al., 2015; Shameli-Sendi et al., 2015; Ben Jemaa et al., 2016; Xia et al., 2015; Mehraghdam et al., 2014; Sahhaf et al., 2015; Yang et al., 2016; Bhamare et al., 2017; Kim et al., 2016; Pham et al., 2017). However, they evaluate different VNF types within a single general framework; that is, they do not treat each VNF type differently from the others and do not examine the functions’ scopes and relations in depth. Therefore, a promising line of future research is to consider dependencies and contradictions among VNFs in order to deploy them in the most appropriate places in line with optimization objectives.

URL: https://www.sciencedirect.com/science/article/pii/S1084804519302760

A survey of malware detection in Android apps: Recommendations and perspectives for future research

Asma Razgallah, ... Kobra Khanmohammadi, in Computer Science Review, 2021

2.3 Other methods

Finally, we list in a third category static methods that do not squarely fall into API or source code analysis.

2.3.1 DroidRanger

The DroidRanger tool [33] detects the characteristic behaviors present in malware from several malicious families. It relies on a crawler to collect Android applications from existing Android markets and stores them in a local repository. For each application collected, DroidRanger extracts the fundamental properties associated with each application (requested permissions, author information, etc.) and organizes them into a central database.

DroidRanger performs two distinct detection processes. The first, for known malware, is based on a permission-based behavioral footprint. The second, for previously unknown malware, is based on a heuristic analysis of the app’s behavior, as reconstructed from the bytecode and the manifest file. Suspicious applications are then executed and monitored to verify if they actually display malicious behavior at runtime. If this is the case, the associated behavioral fingerprint will be extracted and included in the first detection process’ database.
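
A permission-based behavioral footprint could be approximated as a required-permission set, as in the hedged sketch below; the family names and permission lists are invented for illustration and are not DroidRanger's actual footprints:

    # Stage 1 of the two-stage process: match an app's requested permissions against
    # permission-based behavioral footprints of known malware families (toy data).
    FOOTPRINTS = {
        "toy_sms_trojan": {"android.permission.SEND_SMS", "android.permission.RECEIVE_SMS"},
        "toy_spyware": {"android.permission.READ_CONTACTS", "android.permission.INTERNET",
                        "android.permission.READ_SMS"},
    }

    def match_known_families(requested_permissions):
        """Return the families whose footprint is fully contained in the app's permissions."""
        return [family for family, needed in FOOTPRINTS.items()
                if needed <= set(requested_permissions)]

    app_perms = ["android.permission.INTERNET", "android.permission.SEND_SMS",
                 "android.permission.RECEIVE_SMS"]
    print(match_known_families(app_perms))  # ['toy_sms_trojan']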

DroidRanger was evaluated on the most popular applications of 2011 and yielded positive results. However, it covers only free applications and only five Android markets, and it has a false negative rate of 4.2%.

2.3.2 DREBIN

Arp et al. created DREBIN [4], a tool that performs malware detection on the results of a static analysis of the applications. DREBIN’s feature set appears to be one of the most thorough of all the works we have surveyed. In all, they create 8 feature sets for each app, using data from the Android manifest file (including permissions, components and requested hardware), and from the decompiled .dex file (including selected API calls and network addresses). The entire feature set is constructed in linear time, without necessitating complex static analysis such as data flow analysis.

Detection is then performed using SVMs. In order to maintain a lightweight footprint on the end user’s device, training is not performed on the smartphone itself. Instead, the classifier is trained offline, and only the resulting model is passed to the user. In order to provide explanations for its results, DREBIN’s classifier is trained not only to detect malware, but also to identify the features that led to the application being flagged. From these, DREBIN constructs a parametrized sentence that explains the reason for the verdict to the user.
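
A much-simplified sketch of that pipeline, using scikit-learn's LinearSVC (the features and data below are invented, and DREBIN's real feature space is vastly larger), might look like this:

    import numpy as np
    from sklearn.svm import LinearSVC

    # Each app is a binary vector over features extracted from the manifest and .dex
    # file (toy example with 4 features; DREBIN uses hundreds of thousands).
    feature_names = ["perm:SEND_SMS", "api:getDeviceId", "hw:camera", "url:known-c2.example"]
    X = np.array([
        [1, 1, 0, 1],   # malware sample
        [1, 0, 0, 1],   # malware sample
        [0, 0, 1, 0],   # benign sample
        [0, 1, 1, 0],   # benign sample
    ])
    y = np.array([1, 1, 0, 0])  # 1 = malware, 0 = benign

    clf = LinearSVC()  # trained offline; only the learned weights ship to the device
    clf.fit(X, y)

    sample = np.array([[1, 1, 0, 0]])
    print("malware" if clf.predict(sample)[0] == 1 else "benign")

    # Explanation: among the features present in the sample, report those with the
    # largest positive weights as the reasons for the verdict.
    weights = clf.coef_[0]
    present = [i for i, v in enumerate(sample[0]) if v]
    top = sorted(present, key=lambda i: weights[i], reverse=True)[:2]
    print([feature_names[i] for i in top])

Keeping the model linear is what makes this kind of explanation cheap: the per-feature weights double as the justification shown to the user.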

DREBIN was tested using 131,611 benign apps from the Google Play Store and two other markets (one Chinese and one Russian), as well as 5,560 malware samples from the Android Malware Genome Project [34]. It obtained a detection rate of 93%, with only 1% false positives, outperforming several antivirus products on the same dataset.

URL: https://www.sciencedirect.com/science/article/pii/S1574013720304585

What are the two types of intrusion detection systems (IDSs)?

There are two main types of IDS, based on where the security team deploys them: the network intrusion detection system (NIDS) and the host intrusion detection system (HIDS).

Which of the following describes a false positive when using an IPS device?

A false positive can be defined as an alert that indicates nefarious activity on a system but that, upon further inspection, turns out to represent legitimate network traffic or behavior.