Search results for: bi-directional long and short-term memory networks
6900 The Underground Ecosystem of Credit Card Frauds
Authors: Abhinav Singh
Abstract:
Point of Sale (POS) malware has been stealing the limelight this year. It has been the elemental factor in some of the biggest breaches uncovered in the past couple of years, including: • Target: the retail giant reported close to 40 million credit card records stolen • Home Depot: the home product retailer reported a breach of close to 50 million credit records • Kmart: the US retailer recently announced a breach of 800 thousand credit card details. In 2014 alone, there have been reports of over 15 major breaches of payment systems around the globe. Memory-scraping malware infecting point of sale devices has been the lethal weapon used in these attacks. This malware is capable of reading payment information from the payment device's memory before it is encrypted, and later sends the stolen details to its parent server. It records all the critical payment information, such as the card number, security number, and owner, and delivers it all in raw format. This talk will cover what happens after these details have been sent to the malware authors. The entire ecosystem of credit card fraud can be broadly classified into three steps: • Purchase of raw details and dumps • Converting them to plastic cash/cards • Shop! Shop! Shop! The focus of this talk will be on the above-mentioned points and how they form an organized network of cyber-crime. The first step involves buying and selling the stolen details. The key points to emphasize are: • How this raw information is sold in the underground market • The buyer and seller anatomy • Building your shopping cart and preferences • The importance of reputation and vouches • Customer support and replacements/refunds. These are some of the key points that will be discussed. But the story doesn't end here. As of now, the buyer only has the raw card information. How will this raw information be converted to plastic cash?
Now comes the second part of this underground economy, wherein these raw details are converted into actual cards. There are well-organized services running underground that can help convert these details into plastic cards; we will discuss this technique in detail. At last, the final step involves shopping with the stolen cards. The cards generated with the stolen details can easily be used to swipe-and-pay for goods at different retail shops. Usually these purchases are of expensive items that have good resale value. Apart from using the cards at stores, there are underground services that let you deliver online orders to their dummy addresses; once the package is received, it is forwarded to the original buyer. These services charge based on the value of the item being delivered. The overall underground ecosystem of credit card fraud works in a bulletproof way, and it involves people working in close groups and making heavy profits. This is a brief summary of what I plan to present at the talk. I have done extensive research and have collected a good deal of material to present as samples, including: • a list of underground forums • credit card dumps • IRC chats among these groups • personal chats with big card sellers • an inside view of these forum owners. The talk will be concluded by throwing light on how these breaches are tracked during investigation: how credit card breaches are tracked down and what steps financial institutions can take to build an incident response around them.
Keywords: POS malware, credit card frauds, enterprise security, underground ecosystem
Procedia PDF Downloads 439
6899 Examining Social Connectivity through Email Network Analysis: Study of Librarians' Emailing Groups in Pakistan
Authors: Muhammad Arif Khan, Haroon Idrees, Imran Aziz, Sidra Mushtaq
Abstract:
Social platforms like online discussion and mailing groups are well aligned with academic as well as professional learning spaces. Professional communities are increasingly moving to online forums for sharing and capturing intellectual output. This study investigated the dynamics of social connectivity in the Yahoo mailing groups of Pakistani Library and Information Science (LIS) professionals using graph theory techniques. Design/Methodology: Social network analysis is an increasingly important domain for scientists investigating whether people grow together through online social interaction or whether they merely reflect connectivity. We conducted a longitudinal study using network graph theory to analyze a large data set of email communication. The data were collected from three Yahoo mailing groups using network analysis software over a period of six months, i.e., January to June 2016. The findings of the network analysis were reviewed through focus group discussion with LIS experts and selected respondents of the study. Data were analyzed in Microsoft Excel, and network diagrams were visualized using NodeXL and the ORA-NetScenes package. Findings: The findings demonstrate that professionals and students exhibit intellectual growth the more they become tied into a network by interacting and participating in communication through online forums. The study reports on the dynamics of the large network by visualizing the email correspondence among group members in a network consisting of vertices (members) and edges (randomized correspondence). The modelled pairwise relationships between group members are illustrated to show the characteristics, reasons, and strength of ties. The connectivity of nodes illustrated the frequency of communication among group members; node coupling, diffusion of networks, and node clustering are examined in depth.
Network analysis was found to be a useful technique for investigating the dynamics of a large network.
Keywords: emailing networks, network graph theory, online social platforms, yahoo mailing groups
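The connectivity measures discussed above (node degree, node clustering) can be sketched in a few lines of plain Python. This is a hypothetical, minimal illustration only — the study itself used NodeXL and ORA — and the member names and toy edge list are invented for demonstration:

```python
from collections import defaultdict

def build_graph(edges):
    """Undirected graph as adjacency sets; each edge is (sender, recipient)."""
    adj = defaultdict(set)
    for a, b in edges:
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def degree(adj, node):
    """Number of distinct correspondents of a member."""
    return len(adj[node])

def clustering_coefficient(adj, node):
    """Fraction of a node's neighbour pairs that are themselves connected."""
    nbrs = list(adj[node])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

# Toy mailing-list correspondence (invented names)
emails = [("ali", "sara"), ("ali", "omar"), ("sara", "omar"), ("omar", "hina")]
g = build_graph(emails)
```

In practice a library such as NetworkX would replace this sketch, but the quantities computed are the same ones visualized in the study's network diagrams.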
Procedia PDF Downloads 239
6898 Influential Parameters in Estimating Soil Properties from Cone Penetrating Test: An Artificial Neural Network Study
Authors: Ahmed G. Mahgoub, Dahlia H. Hafez, Mostafa A. Abu Kiefa
Abstract:
The Cone Penetration Test (CPT) is a common in-situ test which generally investigates a much greater volume of soil, more quickly, than is possible with sampling and laboratory tests. Therefore, it has the potential to realize both cost savings and rapid, continuous assessment of soil properties. The principal objective of this paper is to demonstrate the feasibility and efficiency of using artificial neural networks (ANNs) to predict the soil angle of internal friction (Φ) and the soil modulus of elasticity (E) from CPT results, considering the uncertainties and non-linearities of the soil. In addition, ANNs are used to study the influence of different parameters and to recommend which parameters should be included as inputs to improve the prediction. Neural networks discover relationships in the input data sets through the iterative presentation of the data and the intrinsic mapping characteristics of neural topologies. The General Regression Neural Network (GRNN) is one of the powerful neural network architectures utilized in this study. A large amount of field and experimental data, including CPT results, plate load tests, direct shear box tests, grain size distributions, and calculated overburden pressures, was obtained from a large project in the United Arab Emirates. This data was used for the training and validation of the neural network. A comparison was made between the results obtained from the ANN approach and some common traditional correlations that predict Φ and E from CPT results, with respect to the actual results of the collected data. The results show that the ANN is a very powerful tool: very good agreement was obtained between results estimated by the ANN and actual measured results, in comparison to other correlations available in the literature. The study recommends some easily available parameters that should be included in the estimation of the soil properties to improve the prediction models.
It is shown that the use of the friction ratio in the estimation of Φ and the use of fines content in the estimation of E considerably improve the prediction models.
Keywords: angle of internal friction, cone penetrating test, general regression neural network, soil modulus of elasticity
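At its core, a GRNN is a kernel-weighted average of training targets (a Nadaraya–Watson estimator): each training sample votes for its target value with a Gaussian weight based on its distance to the query point. A minimal sketch, where the toy data and the single smoothing parameter `sigma` are assumptions for illustration, not values from the study:

```python
import math

def grnn_predict(x_train, y_train, x, sigma=1.0):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets.

    x_train: list of feature tuples (e.g. CPT readings); y_train: targets
    (e.g. measured Φ or E); x: query feature tuple; sigma: smoothing width.
    """
    weights = [math.exp(-sum((xi - xj) ** 2 for xi, xj in zip(xt, x))
                        / (2 * sigma ** 2))
               for xt in x_train]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, y_train)) / total
```

A query equidistant from two training points yields exactly their mean, and predictions slide toward the nearer training target as the query moves — the smooth interpolation behavior that makes GRNNs attractive for noisy geotechnical data.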
Procedia PDF Downloads 415
6897 Distributed Key Management with Fewer Transmitted Messages in the Rekeying Process to Secure IoT Wireless Sensor Networks in Smart-Agro
Authors: Safwan Mawlood Hussien
Abstract:
Internet of Things (IoT) is a promising technology that has received considerable attention in different fields such as health, industry, defence, and agriculture. Due to their limited computing, storage, and communication capacity, IoT objects are more vulnerable to attacks. Many solutions have been proposed to solve security issues, such as key management using symmetric-key ciphers. This study provides a scalable group key distribution management scheme based on elliptic curve cryptography, with fewer messages transmitted during rekeying. The method has been validated through simulations in OMNeT++.
Keywords: elliptic curves, Diffie–Hellman, discrete logarithm problem, secure key exchange, WSN security, IoT security, smart-agro
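The key agreement underlying the scheme's keywords is Diffie–Hellman, whose security rests on the discrete logarithm problem. As a hedged illustration of the principle only, here is classic modular Diffie–Hellman over a prime-order multiplicative group — real elliptic-curve deployments use curve groups instead, and the prime, generator, and private keys below are arbitrary choices for the sketch:

```python
# Public parameters (illustrative only): a Mersenne prime and a small generator.
p = 2 ** 127 - 1
g = 3

def dh_public(priv):
    """Derive a public value from a private exponent: g^priv mod p."""
    return pow(g, priv, p)

def dh_shared(priv, other_pub):
    """Derive the shared secret from one's own private key and the peer's public value."""
    return pow(other_pub, priv, p)

# Two parties (toy private keys) agree on a key without transmitting it.
a_priv, b_priv = 123456789, 987654321
a_pub, b_pub = dh_public(a_priv), dh_public(b_priv)
key_a = dh_shared(a_priv, b_pub)   # g^(a*b) mod p
key_b = dh_shared(b_priv, a_pub)   # same value, computed by the other side
```

Both parties arrive at `g^(ab) mod p` while only the public values cross the network — the same agreement structure an EC-based group key scheme builds on, with elliptic-curve point multiplication in place of modular exponentiation.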
Procedia PDF Downloads 119
6896 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages needed to discover routes. In this paper we utilize the network nodes' positions to group the nodes into connected clusters. We use cluster-heads only for forwarding the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
Keywords: ad-hoc network, MANET, ant colony routing, position based routing
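One plausible reading of the position-based clustering step — grouping nodes into spatial cells and electing the node nearest each cell centre as cluster-head — can be sketched as follows. The grid cell size, node names, and coordinates are assumptions for illustration, not the paper's actual scheme:

```python
from collections import defaultdict

def assign_clusters(nodes, cell=100.0):
    """Group nodes into square grid cells by position; elect one head per cell.

    nodes: {name: (x, y)}. Returns (cells, heads): membership per cell and
    the head of each cell (the member closest to the cell centre).
    """
    cells = defaultdict(list)
    for name, (x, y) in nodes.items():
        cells[(int(x // cell), int(y // cell))].append(name)
    heads = {}
    for (cx, cy), members in cells.items():
        centre = ((cx + 0.5) * cell, (cy + 0.5) * cell)
        heads[(cx, cy)] = min(
            members,
            key=lambda n: (nodes[n][0] - centre[0]) ** 2
                        + (nodes[n][1] - centre[1]) ** 2)
    return cells, heads

# Toy topology: "a" and "b" share cell (0,0); "c" sits alone in cell (1,0).
clusters, heads = assign_clusters({"a": (10, 10), "b": (50, 50), "c": (150, 40)})
```

Only the elected heads would then forward the ant-colony route discovery messages, which is where the overhead reduction comes from.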
Procedia PDF Downloads 425
6895 The Effects of Normal Aging on Reasoning Ability: A Dual-Process Approach
Authors: Jamie A. Prowse Turner, Jamie I. D. Campbell, Valerie A. Thompson
Abstract:
The objective of the current research was to use a dual-process theory framework to explain age-related differences in reasoning. Seventy-two older (M = 80.0 years) and 72 younger (M = 24.6 years) adults were given a variety of reasoning tests (i.e., a syllogistic task, a base rate task, the Cognitive Reflection Test, and a perspective manipulation), as well as independent tests of capacity (working memory, processing speed, and inhibition), thinking styles, and metacognitive ability, to account for these age-related differences. It was revealed that age-related differences were limited to problems that required Type 2 processing and were related to differences in cognitive capacity, individual difference factors, and strategy choice. Furthermore, older adults' performance can be improved by reasoning from another's perspective and cannot, at this time, be explained by metacognitive differences between young and older adults. All of these findings fit well within a dual-process theory of reasoning, which provides an integrative framework accounting for previous findings and the findings presented in the current manuscript.
Keywords: aging, dual-process theory, performance, reasoning ability
Procedia PDF Downloads 191
6894 Nexus of Pakistan Stock Exchange with World's Top Five Stock Markets after Launching China Pakistan Economic Corridor
Authors: Abdul Rauf, Xiaoxing Liu, Waqas Amin
Abstract:
Stock markets are becoming more and more connected to each other due to liberalization and globalization trends in recent years. The China Pakistan Economic Corridor (CPEC) has dragged the Pakistan stock exchange to new heights, and global investors are making investments to reap its benefits. So, from the investors' and government's perspective, the study focuses on the co-integration of the Pakistan stock exchange with the world's five big economies, i.e., the US, China, England, Japan, and France. The time period of the study is seven years, i.e., 2010 to 2016, and daily values of the major indices of the corresponding stock exchanges were collected. All variables of the study are stationary at first difference, as confirmed by a unit root test. The Johansen system cointegration test is performed for the analysis of the data, along with the Granger causality test. The cointegration test asserted that the Pakistan stock exchange is integrated with the Shanghai Stock Exchange (SSE) and the NIKKEI in the short run; the Granger causality test also confirmed these results. NASDAQ, FTSE, and DAX are not cointegrated and do not Granger-cause in the short run, but in the long run these markets are bonded with the Pakistan stock exchange (KSE). The VECM also confirmed this liaison in the short and long run. Investors, therefore, need to stay updated regarding the cointegration of the world's stock exchanges to ensure well-diversified and risk-adjusted high returns. Equally, governments also need this updated status so that they can reduce cointegration through multiple steps and hence attract investors for diversified investment.
Keywords: CPEC, DAX, FTSE, liberalization, NASDAQ, NIKKEI, SSE, stock markets
Procedia PDF Downloads 302
6893 Practice of Social Innovation in School Education: A Study of Third Sector Organisations in India
Authors: Prakash Chittoor
Abstract:
In the recent past it has been realised, especially in the third sector, that employing social innovation is crucial for achieving viable and long-lasting social transformation. In this context, education is one among many sectors that have opened up to such a move, where social innovation emerges as key to reaching the excluded sections who often fail to get support from either policy or market interventions. Indeed, that education is a crucial factor for social development is well understood at both the academic and policy levels. In order to achieve better results, interventions from multiple sectors may be required, since education cultivates the capabilities and skills of the deprived and so ensures both market and social participation in the long run. Despite the state's intervention, millions of children are still out of school due to lack of political will, lapses in policy implementation, and the neoliberal intervention of marketization. As a result, universalisation of elementary education has become an elusive goal for poor and marginalised sections, while the state comes under constant pressure from the corporate sector to withdraw from education, compromising the provision of quality education. At this juncture, the role that third sector organizations play is quite remarkable. They have evolved into key players in the education sector, reaching out to the poor and marginalised in far-flung areas. These organisations work in resource-constrained environments, yet, in order to achieve larger social impact, they adopt various social innovations from time to time to reach the unreached. Their attempts are not limited to just approaching unreached children but extend to retaining them in the schooling system for the long term, so that their families and communities reap the results.
There is a need to highlight the various innovative ways adopted and practiced by third sector organisations in India to achieve the elusive goal of universal access to primary education with quality. With this background, the paper primarily attempts to present an in-depth understanding of the innovative practices employed by third sector organisations like Isha Vidya through its government school adoption programme in India, where it engages with government and builds capabilities among government teachers to promote state-run schooling with quality and better infrastructure. Further, this paper assesses whether such innovative attempts have succeeded in achieving universal quality education in the areas where the organisation operates, and draws implications for state policy.
Keywords: school education, third sector organisations, social innovation, market domination
Procedia PDF Downloads 262
6892 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Zona Kostic, Warren Thompson
Abstract:
Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on obfuscating the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size, with a broad standard deviation, to minimize the possibility of coincidence between monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces would take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops, and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information systems and supervisory control and data acquisition systems.
Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory
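The core idea — both edge devices deriving the same pseudo-random schedule of private IPv4 addresses from shared state, so no address list ever crosses the wire — can be sketched as follows. The seed handling, `10.0.0.0/16` prefix, and surface size are illustrative assumptions, not the paper's implementation (which would also need a cryptographically strong generator rather than Python's Mersenne Twister):

```python
import random

def hop_sequence(shared_seed, n, surface=4096):
    """Derive n pseudo-random private IPv4 hop targets from a shared seed.

    Both edge devices run this with the same seed and therefore agree on
    the schedule without transmitting it. surface >= 1e3 addresses, per
    the statistical analysis summarized in the abstract.
    """
    rng = random.Random(shared_seed)
    seq = []
    for _ in range(n):
        host = rng.randrange(surface)          # pick a host in the hopping surface
        seq.append(f"10.0.{host // 256}.{host % 256}")
    return seq
```

Because the schedule is a pure function of the seed, the two devices stay synchronized as long as they hop at the same agreed frequency; an observer sees only an apparently random walk over the surface.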
Procedia PDF Downloads 185
6891 A Review on the Importance of Nursing Approaches in Nutrition of Children with Cancer
Authors: Ş. Çiftcioğlu, E. Efe
Abstract:
In recent years, cancer has been at the top of the diseases that cause death in children. Adequate and balanced nutrition plays an important role in the treatment of cancer. Cancer and its treatment affect food intake, absorption, and metabolism, causing nutritional disorders. Appropriate nutrition is very important for the child with cancer to feel well before, during, and after treatment. There are various difficulties in feeding children with cancer: some factors are cancer-related, while others are environmental and behavioral. As the health professionals who spend the most time with children in the hospital, nurses should be able to support children in their nutrition and help them achieve balanced nutrition. This study aimed to evaluate the importance of nursing approaches in the nutrition of children with cancer, and is planned as a review of the literature in this field. Anorexia may develop due to psychogenic causes, chemotherapeutic agents, or accompanying infections, and nutrient intake may be reduced. In addition, stomatitis, mucositis, taste and odor changes in the mouth, nausea, vomiting, and diarrhea can also reduce oral intake and result in significant energy deficits. In assessing the nutritional status of children with cancer, determining weight loss and taking a good nutrition anamnesis of the child are essential. Some anthropometric measurements and biochemical tests should be used to evaluate the nutrition of the child. The nutritional status of pediatric cancer patients has been studied for a long time, and malnutrition, in particular undernutrition, in this population has long been recognized. Yet its management remains variable, with many malnourished children going unrecognized and consequently untreated.
Nutritional support is important for pediatric cancer patients and should be integrated into the overall treatment of these children.
Keywords: cancer treatment, children, complication, nutrition, nursing approaches
Procedia PDF Downloads 220
6890 Body Composition Evaluation among High Intensity and Long Term Walking Distance Participants
Authors: Priscila Vitorino, Jeeziane Rezende, Edison Pereira, Adrielly Silva, Weimar Barroso
Abstract:
Insight into body composition during physical activity is relevant for monitoring athletic performance, since body composition is important for velocity, resistance, and power, and has an effect on force and agility. The purpose of this study was to identify the anthropometric profile and to evaluate and correlate body mass index and bioimpedance behavior during the days of the Caminhada Ecológica de Goiás - Brasil. A longitudinal study was performed with 25 male participants, with an average age of 45.6±9.1 years. All participants were physically active. Body composition was evaluated by body mass index (BMI) measurement and bioimpedance procedures. Both were collected 20 days before the beginning of the walk (A0) and on four days of the walk itself (A1, A2, A3 and A4). Data were collected at the end of each walking day at the athletes' accommodations. The final distance of the walking route was 308 km over five days, with an average of 62 km/day at 7.6 km/hour, and an average temperature of 30°C. Data are presented as mean and standard deviation. ANOVA (Bonferroni post test) was used to compare repeated measurements between the days. Pearson's correlation test was used to correlate BMI with lean mass, fat mass, and water. BMI decreased from A0 to A1, A2 and A3 (p < 0.01) and increased on A4 (p < 0.01). No changes were observed in fat percentage (p = 0.60), lean mass (p = 0.10) or body water composition (p = 0.09). A positive and moderate correlation between BMI and fat percentage was observed; an inverse and moderate correlation occurred between BMI, lean mass and body water composition. Total body mass changed during the high-intensity, long-distance walk; however, the values of body fat, lean mass and water were maintained.
Keywords: aerobic exercise, body composition, metabolism, sports
Procedia PDF Downloads 310
6889 A Machine Learning Based Method to Detect System Failure in Resource Constrained Environment
Authors: Payel Datta, Abhishek Das, Abhishek Roychoudhury, Dhiman Chattopadhyay, Tanushyam Chattopadhyay
Abstract:
Machine learning (ML) and deep learning (DL) are predominantly used in image/video processing, natural language processing (NLP), and audio and speech recognition, but not as much in system performance evaluation. In this paper, the authors describe the architecture of an abstraction layer constructed using ML/DL to detect system failure. The proposed system detects system failure by evaluating the performance metrics of an IoT service deployed in a constrained infrastructure environment. It has been tested on a manually annotated data set containing different metrics of the system — such as the number of threads, throughput, average response time, CPU usage, memory usage, and network input/output — captured in different hardware environments, such as edge (Atom-based gateway) and cloud (AWS EC2). The main challenge in developing such a system is that the classification accuracy should be 100%, as an error in the system degrades the service performance and consequently affects the reliability and high availability that are mandatory for an IoT system. The proposed ML/DL classifiers work with 100% accuracy on the data set of nearly 4,000 samples captured within the organization.
Keywords: machine learning, system performance, performance metrics, IoT, edge
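The abstract does not spell out the classifier internals, so as a hedged stand-in, a minimal nearest-neighbour decision over metric vectors (threads, CPU %, memory %, response time in ms) illustrates how annotated samples like those described can drive failure detection. The training samples and feature layout here are invented for the sketch:

```python
def predict_failure(sample, train):
    """Classify a system-metrics vector by its nearest annotated sample.

    sample: (threads, cpu_pct, mem_pct, resp_ms)
    train:  list of ((threads, cpu_pct, mem_pct, resp_ms), label) pairs
    """
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # 1-NN: return the label of the closest annotated sample
    _, label = min(train, key=lambda t: sq_dist(t[0], sample))
    return label

# Invented annotated samples: one healthy profile, one failing profile
train = [((8, 20.0, 30.0, 50), "healthy"),
         ((64, 95.0, 90.0, 900), "failure")]
```

A real deployment would normalize the features (response time dominates the raw squared distance) and use a trained ML/DL model, but the shape of the decision — annotated metric vectors in, a failure label out — is the same.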
Procedia PDF Downloads 195
6888 The Integration of Patient Health Record Generated from Wearable and Internet of Things Devices into Health Information Exchanges
Authors: Dalvin D. Hill, Hector M. Castro Garcia
Abstract:
A growing number of individuals utilize wearable devices on a daily basis. The usage and functionality of these wearable devices vary from user to user. One popular usage of such devices is to track health-related activities, which are typically stored in the device's memory or uploaded to an account in the cloud; based on the current trend, the data accumulated from the wearable device are stored in a standalone location. In many cases, these health-related data are not factored into the holistic view of a user's health lifestyle or record. The health-related data generated from wearable and Internet of Things (IoT) devices can serve as empirical information for a medical provider, as the standalone data can add value to the holistic health record of a patient. This paper proposes a solution to incorporate the data gathered from these wearable and IoT devices into a patient's Personal Health Record (PHR) stored within the confines of a Health Information Exchange (HIE).
Keywords: electronic health record, health information exchanges, internet of things, personal health records, wearable devices, wearables
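A minimal sketch of the proposed integration step — folding standalone wearable/IoT readings into a PHR record — might look like the following. The field names and record layout are assumptions for illustration, not an HIE standard such as FHIR:

```python
def merge_into_phr(phr, readings):
    """Append standalone wearable/IoT readings to a patient's PHR dict."""
    observations = phr.setdefault("device_observations", [])
    for r in readings:
        observations.append({
            "source": r["device"],   # which wearable produced the reading
            "type": r["type"],       # e.g. heart rate, step count
            "value": r["value"],
            "time": r["time"],
        })
    return phr

# Hypothetical reading pulled from a wearable's cloud account
readings = [{"device": "watch-x1", "type": "heart_rate",
             "value": 72, "time": "2021-03-01T10:00:00"}]
merged = merge_into_phr({"patient_id": "p-001"}, readings)
```

In a real HIE the observations would be mapped to a standard vocabulary before merging, so that providers can query wearable data alongside clinical records.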
Procedia PDF Downloads 128
6887 An Interesting Case of Management of Life Threatening Calcium Disequilibrium in a Patient with Parathyroid Tumor
Authors: Rajish Shil, Mohammad Ali Houri, Mohammad Milad Ismail, Fatimah Al Kaabi
Abstract:
The clinical presentation of primary hyperparathyroidism can vary from simple asymptomatic hypercalcemia to a severe, life-threatening hypercalcemic crisis with multi-organ dysfunction, and can be due to a parathyroid adenoma or, sometimes, a malignant cancer. This cascade of clinical presentations can pose a diagnostic and therapeutic challenge. We present a case of severe hypercalcemic crisis due to parathyroid adenoma, with an emphasis on early management, diagnosis, and interventions to prevent lifelong complications and permanent organ dysfunction. A 30-year-old female with a history of primary infertility was admitted to the Al Ain Hospital critical care unit with acute severe necrotizing pancreatitis. She initially had a one-month history of intermittent abdominal pain, for which she was treated conservatively without much improvement; later she developed life-threatening severe pancreatitis, which required admission to the critical care unit. She was transferred from a private healthcare facility, where she was found to have a very high calcium level of up to 15 mmol/L. She received systemic zoledronic acid, which lowered her calcium level transiently before it rose again. She went on to develop multiple end-organ damage along with multiple electrolyte disturbances. She was found to have high levels of parathyroid hormone, which correlated with a parathyroid mass in the neck on radiological imaging. After a long course of medical treatment to lower the calcium to a near-normal level, parathyroidectomy was performed, and histology showed a parathyroid adenoma. She developed hungry bone syndrome after the surgery and a pancreatic pseudocyst after resolution of the pancreatitis. She required aggressive treatment with intravenous calcium for her hypocalcemia, as she had received zoledronic acid at the beginning of the disease. She was later discharged on long-term calcium and other electrolyte supplements.
In patients presenting with hypercalcemia, it is prudent to investigate and start treatment early to prevent complications and end-organ damage from hypercalcemia, and also to treat the primary cause of the hypercalcemia, with careful follow-up to prevent hypocalcemic complications after treatment. It is important to follow up patients with parathyroid adenomas over a long period in order to detect any recurrence of the tumor and to establish whether the primary tumor is benign or malignant.
Keywords: hypercalcemia, pancreatitis, hypocalcemia, hyperparathyroidism
Procedia PDF Downloads 123
6886 IT System in the Food Supply Chain Safety, Application in SMEs Sector
Authors: Mohsen Shirani, Micaela Demichela
Abstract:
The food supply chain is one of the most complex supply chain networks, due to its perishable nature and customer-oriented products, and food safety is the major concern for this industry. An IT system could help to minimize the production and consumption of unsafe food by controlling and monitoring the entire system. However, there have been many issues in the adoption of IT systems in this industry, specifically within the SME sector. With this regard, this study presents a novel approach to using IT and traceability systems in the food supply chain, applying RFID and a central database.
Keywords: food supply chain, IT system, safety, SME
Procedia PDF Downloads 477
6885 Cardiopulmonary Resuscitation Performance Efficacy While Wearing a Powered Air-Purifying Respirator
Authors: Jun Young Chong, Seung Whan Kim
Abstract:
Introduction: The use of personal protective equipment for respiratory infection control during cardiopulmonary resuscitation (CPR) is a physical burden to healthcare providers, and it matters how long CPR quality can be maintained according to the recommended guidelines under these circumstances. We investigated whether the 2-minute chest compression shift is appropriate, and how long guideline-compliant compressions can be maintained, under such conditions. Methods: This prospective crossover simulation study was performed at a single center from September 2020 to October 2020. Five indicators of CPR quality were measured during the first and second sessions of the study period. All participants wore a Level D powered air-purifying respirator (PAPR), and the experiment was conducted using a Resusci Anne manikin, which can measure the quality of chest compressions. Each participant completed two sessions. In session one, 2 minutes of chest compressions followed by a 2-minute rest was repeated twice; in session two, 1 minute of chest compressions followed by a 1-minute rest was repeated four times. Results: All 34 participants completed the study. The mean depth of deep and sufficient compressions was 65.9 ± 13.1 mm in the 1-minute shift group and 61.5 ± 30.5 mm in the 2-minute shift group. The overall mean depth was 52.8 ± 4.3 mm in the 1-minute shift group and 51.0 ± 6.1 mm in the 2-minute shift group. For these two values there was a statistically significant difference between the two sessions; there was no statistically significant difference in the other CPR quality values. Conclusions: The results suggest changing the current standard from 2-minute to 1-minute compression cycles, owing to the significant reduction in chest compression quality during CPR with a PAPR.
Keywords: cardiopulmonary resuscitation, chest compression, personal protective equipment, powered air-purifying respirator
Procedia PDF Downloads 114
6884 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization
Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın
Abstract:
There are many analytical methods to estimate the crack growth life of a component, and soft computing methods are an increasing trend in fatigue life prediction. Their ability to model complex relationships and to handle huge amounts of data motivates researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods — especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search — to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry; regarding the crack type, a corner crack is used. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. Since the finite element model results are validated with analytical calculations, the findings of the finite element model are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between analytical and predicted crack growth lives.
Keywords: aircraft, fatigue, joint, life, optimization, prediction
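Grid search and random grid search over hyperparameters, as named in the abstract, can be sketched generically. The toy scoring function below stands in for cross-validated model performance and is an assumption, as are the parameter names (borrowed from typical random forest settings, not from the study):

```python
import itertools
import random

def grid_search(param_grid, score):
    """Exhaustive search over the Cartesian product of hyperparameter values."""
    names = list(param_grid)
    best = max(itertools.product(*(param_grid[n] for n in names)),
               key=lambda combo: score(dict(zip(names, combo))))
    return dict(zip(names, best))

def random_grid_search(param_grid, score, n_iter=10, seed=0):
    """Score n_iter randomly sampled configurations from the same grid."""
    rng = random.Random(seed)
    names = list(param_grid)
    candidates = [{n: rng.choice(param_grid[n]) for n in names}
                  for _ in range(n_iter)]
    return max(candidates, key=score)

# Toy "validation score" standing in for cross-validated model performance;
# it peaks at n_estimators=200, max_depth=8 by construction.
def score(p):
    return -((p["n_estimators"] - 200) ** 2 + (p["max_depth"] - 8) ** 2)

grid = {"n_estimators": [50, 100, 200], "max_depth": [4, 8, 16]}
```

Exhaustive grid search always finds the best combination in the grid at the cost of trying every one; random grid search trades that guarantee for a fixed evaluation budget, which is why both appear in the study.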
Procedia PDF Downloads 175
6883 Impact of Mucormycosis Infection in Limb Salvage for Trauma Patients
Authors: Katie-Beth Webster
Abstract:
Mucormycosis is a rare opportunistic fungal infection that, if left untreated, can cause large-scale tissue necrosis and death. A number of cases are described in the literature, most commonly in the head and neck region arising from the sinuses, and usually in immunocompromised patient subgroups. This study reviewed cases of mucormycosis in previously fit and healthy young trauma patients to assess predisposing factors for infection and the adequacy of current treatment paradigms. These trauma patients likely contracted the fungal infection from soil at the site of the incident. Despite early washout and debridement of the wounds at the scene of the injury and on arrival in hospital, both patients contracted mucormycosis. Inadequate early debridement of soil-contaminated limbs was suspected to be one of the major factors leading to catastrophic tissue necrosis. In both cases, the patients required a higher level of amputation than would initially have been expected from the level of their injury, as cutaneous and soft tissue necrosis from fungal infiltration led to osteomyelitis and systemic sepsis. The literature suggests that diagnosis is often protracted in this condition because of inadequate early treatment and long processing times for fungal cultures. If fungal cultures were sent at the time of first assessment and adequate debridement were performed aggressively and early, these critically unwell trauma patients could receive appropriate antifungal and surgical treatment earlier in their episode of care, which is likely to improve their long-term outcomes.
Keywords: mucormycosis, plastic surgery, osteomyelitis, trauma
Procedia PDF Downloads 208
6882 There Is No Meaningful Opportunity in Meaningless Data: Why It Is Unconstitutional to Use Life Expectancy Tables in Post-Graham Sentences
Authors: Stacie Nelson Colling, Adele Cummings
Abstract:
The United States Supreme Court recently announced that it is unconstitutional to sentence a child to life without parole for non-homicide offenses and that each child so situated must be afforded a meaningful opportunity for release from prison in his lifetime. The Court also declared that it is unconstitutional to impose a mandatory sentence of life without parole on a child for homicide offenses. Across the United States, attorneys and advocates continue to litigate issues surrounding the implementation of these legal principles. Some states have held that any sentence to a finite term of years, no matter how long, is not the same as ‘life’ and therefore does not violate the constitution. Other states have held that a sentence to a term of years that is less than the expected life of that particular child is not unconstitutional. In Colorado, the courts have routinely looked to life expectancy estimates from governmental organizations to determine how long a particular child is expected to live. They then compare that estimate to the date the child is expected to be eligible for parole, and if the child is expected to still be living when he becomes eligible, the sentence is deemed constitutional. This paper argues that it is inappropriate, reckless, unconstitutional, and scientifically unsound to use such estimates in determining whether a child will have a meaningful opportunity for release from prison and life outside of prison before he dies. The opportunity for release must mean more than a probability that a child will be released before his death; it must include the opportunity for a meaningful life outside of prison, not just the opportunity to be released and then die on the outside. The paper further argues that life expectancy estimates cannot guide a court or a legislature in determining whether a sentence is or is not constitutional.
Keywords: life without parole, life expectancy, juvenile sentencing, meaningful opportunity for release from prison
Procedia PDF Downloads 394
6881 Development of a Real-Time Brain-Computer Interface for Interactive Robot Therapy: An Exploration of EEG and EMG Features during Hypnosis
Authors: Maryam Alimardani, Kazuo Hiraki
Abstract:
This study presents a framework for the development of a new generation of therapy robots that can interact with users by monitoring their physiological and mental states. Here, we focus on one of the more controversial methods of therapy, hypnotherapy. Hypnosis has been shown to be useful in the treatment of many clinical conditions, and even for healthy people it can be an effective technique for relaxation or for the enhancement of memory and concentration. Our aim is to develop a robot that collects information about the user’s mental and physical states using electroencephalogram (EEG) and electromyography (EMG) signals and performs cost-effective hypnosis in the comfort of the user’s house. The presented framework consists of three main steps: (1) find the EEG correlates of mind state before, during, and after hypnosis and establish a cognitive model for state changes; (2) develop a system that tracks the changes in EEG and EMG activity in real time and determines whether the user is ready for suggestion; and (3) implement the system in a humanoid robot that talks to and conducts hypnosis on users based on their mental states. This paper presents a pilot study on the first stage, the detection of EEG and EMG features during hypnosis.
Keywords: hypnosis, EEG, robotherapy, brain-computer interface (BCI)
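A minimal sketch of the kind of EEG feature such a real-time tracker might monitor: relative band power in the alpha band (8-12 Hz), computed here on a synthetic two-second window. The sampling rate, band limits, and signal are assumptions, not values from the study.

```python
# Hedged sketch: relative alpha-band power from one EEG window.
# The "EEG" here is a synthetic 10 Hz oscillation plus noise.
import numpy as np

fs = 256                                   # sampling rate in Hz (assumed)
t = np.arange(fs * 2) / fs                 # 2-second analysis window
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)

spec = np.abs(np.fft.rfft(eeg)) ** 2       # power spectrum
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)
alpha = spec[(freqs >= 8) & (freqs <= 12)].sum() / spec.sum()
print(f"relative alpha power: {alpha:.2f}")
```

A real-time system would slide this window over the incoming stream and feed the feature to a state classifier.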
Procedia PDF Downloads 256
6880 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups
Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley
Abstract:
Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. This complexity often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Because the projects interact with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan, and this spatial context adds complexity to the group's decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in complex spatial-related long-term decision and sensemaking processes, and to identify and present issues experienced in practical settings of this type of decision. A series of exploratory semi-structured interviews with members of the two projects elicited an understanding of their operation. Two stages of thematic analysis, inductive and deductive, identified emergent themes around group structure, data usage, and decision making within these groups. When data were made available to the group, there were commonly issues with the perceived veracity and validity of the data presented; this impaired the group's ability to reach consensus and, therefore, to make decisions. Similarly, responses to forecasted or modelled data differed, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of the further support required for team sensemaking and decision making in complex capital projects. It also discusses the barriers to effective decision making found in this setting and suggests opportunities to develop decision support systems for this type of strategic team decision-making process. Recommendations are made for further research into sensemaking and decision-making processes in this complex spatial-related setting.
Keywords: decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making
Procedia PDF Downloads 131
6879 Fault Location Detection in Active Distribution System
Authors: R. Rezaeipour, A. R. Mehrabi
Abstract:
The recent increase of DGs and microgrids in distribution systems disturbs the traditional structure of the system, and coordination between the protection devices in such a system becomes a concern for network operators. This paper presents a new method for fault location detection in active distribution networks that is independent of the fault type and fault resistance. The method uses synchronized voltage and current measurements at the interconnection of DG units and is able to adapt to changes in the topology of the system. The method has been tested on a 38-bus distribution system, with very encouraging results.
Keywords: fault location detection, active distribution system, microgrids, network operators
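The paper's method adapts to DG interconnections and topology changes; as background only, the sketch below shows the classic two-terminal impedance-based principle that synchronized voltage and current phasors make possible. All line parameters and phasor values are synthetic, and this is not the algorithm proposed in the paper.

```python
# Hedged sketch: two-terminal impedance-based fault location from
# synchronized phasors. Equating the fault voltage seen from each end,
#   V_s - d*z*I_s = V_r - (L-d)*z*I_r,
# gives d = (V_s - V_r + L*z*I_r) / (z*(I_s + I_r)).
import numpy as np

z = 0.05 + 0.35j      # line impedance per km in ohms (assumed)
L = 20.0              # line length in km (assumed)
d_true = 7.5          # actual fault position from the sending end (km)

Vf, If = 4_000 + 0j, 900 - 300j          # fault-point voltage and current
I_s, I_r = 0.6 * If, 0.4 * If            # contributions from each line end
V_s = Vf + d_true * z * I_s              # synthesize consistent terminal phasors
V_r = Vf + (L - d_true) * z * I_r

d_est = ((V_s - V_r + L * z * I_r) / (z * (I_s + I_r))).real
print(f"estimated fault distance: {d_est:.2f} km")
```

Note that the estimate depends only on the terminal phasors and line impedance, not on the fault resistance, which is why synchronized two-end measurements are attractive.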
Procedia PDF Downloads 789
6878 Recognition of Tifinagh Characters with Missing Parts Using Neural Network
Authors: El Mahdi Barrah, Said Safi, Abdessamad Malaoui
Abstract:
In this paper, we present an algorithm for reconstructing Tifinagh characters from incomplete 2D scans. The algorithm is based on the correlation between a lost block and its neighbors. The proposed system contains three main parts: pre-processing, feature extraction, and recognition. In the first step, we construct a database of Tifinagh characters. In the second step, we apply a shape analysis algorithm. In the classification part, we use a neural network. The simulation results demonstrate that the proposed method gives good results.
Keywords: Tifinagh character recognition, neural networks, local cost computation, ANN
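As a rough illustration of recognizing characters with missing parts, the sketch below trains a small scikit-learn neural network and then blanks a pixel block in the test images before classification. Scikit-learn's digits dataset stands in for the Tifinagh database, so the numbers here say nothing about the paper's results.

```python
# Hedged sketch: character recognition robustness to a missing block.
# load_digits is a placeholder for a Tifinagh character database.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0                               # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)

# Simulate a lost 4x4 block in the top-left corner of each test image.
X_damaged = X_te.copy().reshape(-1, 8, 8)
X_damaged[:, :4, :4] = 0.0
acc = clf.score(X_damaged.reshape(len(X_te), -1), y_te)
print(f"accuracy with a missing block: {acc:.2f}")
```

The paper's correlation-based reconstruction step would fill the lost block from its neighbors before this classification stage, rather than classifying the damaged image directly.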
Procedia PDF Downloads 334
6877 Event Data Representation Based on Time Stamp for Pedestrian Detection
Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita
Abstract:
In association with the wave of electric vehicles (EVs), low-energy-consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only captures pixels whose intensity changes, so there is no signal in areas without any intensity change. This makes the sensor more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed at the source. On the other hand, the data are difficult to handle because the format differs completely from an RGB image: the acquired signals are asynchronous and sparse, and each signal consists of an x-y coordinate, a polarity (two values: +1 or -1), and a timestamp, with no intensity such as an RGB value. Existing algorithms therefore cannot be used directly, and a new processing algorithm must be designed to cope with DVS data. To bridge the difference in data format, most prior work builds frame data and feeds it to deep learning models such as convolutional neural networks (CNNs) for object detection and recognition. Even so, good performance is difficult to achieve because intensity information is lacking; polarity is often used as intensity instead of the RGB pixel value, but polarity information is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to deep learning. Concretely, we first build frames divided by a certain time period and then assign each pixel an intensity in response to the timestamps within the frame; for example, a recent signal is given a high value. We expect this data representation to capture the features of moving objects in particular, because the timestamps represent movement direction and speed. Using the proposed method, we built our own dataset with a DVS mounted on a parked car, as part of developing a surveillance application that detects persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a largely static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and of our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption
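The timestamp-based representation described above can be sketched in a few lines: events inside a time window are rasterized into a frame whose pixel values encode recency, with newer events receiving higher intensity. The event tuples, window length, and resolution below are illustrative, not the paper's dataset parameters.

```python
# Hedged sketch: rasterizing DVS events into a timestamp-intensity frame.
import numpy as np

H, W = 4, 6                           # tiny frame for illustration
# events as (x, y, polarity, timestamp_us) tuples -- synthetic
events = [(1, 0, +1, 100), (1, 0, -1, 900), (5, 3, +1, 400)]

t0, t1 = 0, 1000                      # frame accumulation window (us)
frame = np.zeros((H, W), dtype=np.float32)
for x, y, p, ts in events:
    if t0 <= ts < t1:
        # newer events dominate: intensity grows with recency
        frame[y, x] = max(frame[y, x], (ts - t0) / (t1 - t0))

print(frame)
```

The resulting frame (one float per pixel) can be fed to a CNN exactly like a grayscale image, which is what makes this representation a drop-in replacement for the polarity-based benchmark.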
Procedia PDF Downloads 97
6876 Improvements and Implementation Solutions to Reduce the Computational Load for Traffic Situational Awareness with Alerts (TSAA)
Authors: Salvatore Luongo, Carlo Luongo
Abstract:
This paper discusses implementation solutions for reducing the computational load of the Traffic Situational Awareness with Alerts (TSAA) application, which is based on Automatic Dependent Surveillance-Broadcast (ADS-B) technology. In 2008, there were 23 mid-air collisions involving general aviation fixed-wing aircraft, 6 of which were fatal, leading to 21 fatalities. These collisions occurred during visual meteorological conditions, indicating the limitations of the see-and-avoid concept for mid-air collision avoidance as defined by the Federal Aviation Administration (FAA). Commercial aviation aircraft are already equipped with a collision avoidance system, TCAS, which is based on classic transponder technology; this system has dramatically reduced the number of mid-air collisions involving air transport aircraft. In general aviation, the same reduction in mid-air collisions has not occurred, and achieving it is the main objective of the TSAA application. The major difference between the original conflict detection application and TSAA is that conflict detection focuses on preventing loss of separation in en-route environments, whereas TSAA is devoted to reducing the probability of mid-air collision in all phases of flight. TSAA increases the flight crew's traffic situational awareness by providing alerts for traffic detected in conflict with ownship, in support of the see-and-avoid responsibility. Considerable effort was spent on the design process and code generation in order to maximize efficiency and performance in terms of computational load and memory consumption. The TSAA architecture is divided into two high-level systems: the “threats database” and the “conflict detector”. The first receives traffic data from the ADS-B device and stores each target's data history. The conflict detector module estimates ownship and target trajectories in order to detect possible future losses of separation between ownship and each target. Finally, the alerts are verified by additional conflict verification logic, in order to prevent undesirable behavior of the alert flag. To reduce the computational load, a pre-check evaluation module is used. This pre-check is purely a computational optimization, so the performance of the conflict detector is not modified in terms of the number of alerts detected. The pre-check module uses analytical trajectory propagation for both target and ownship, which provides greater accuracy and avoids the step-by-step propagation that would require a greater computational load. Furthermore, the pre-check excludes targets that are certainly not threats, using an analytical and efficient geometrical approach, thereby decreasing the computational load of the following modules. This software improvement is not suggested by FAA documents, and it is the main innovation of this work. The efficiency and efficacy of the enhancement are verified using fast-time and real-time simulations and by execution on a real device in several FAA scenarios. The final implementation also permits FAA software certification in compliance with the DO-178B standard. The computational load reduction allows TSAA to be installed on devices that host multiple applications and/or have limited memory and computational capabilities.
Keywords: traffic situation awareness, general aviation, aircraft conflict detection, computational load reduction, implementation solutions, software certification
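The paper does not publish its geometry, but a pre-check of this general kind can be sketched as a closest-point-of-approach (CPA) filter that discards targets which cannot penetrate a protection radius within the look-ahead horizon. The flat 2D geometry, radius, and horizon below are assumptions for illustration.

```python
# Hedged sketch: analytical CPA filter for excluding non-threat targets
# before a costlier step-by-step conflict detector runs.
import math

def cpa_filter(rel_pos, rel_vel, radius, horizon):
    """Return True if the target can safely be excluded as a threat."""
    px, py = rel_pos          # target position relative to ownship
    vx, vy = rel_vel          # target velocity relative to ownship
    v2 = vx * vx + vy * vy
    # time of closest approach, clamped to [0, horizon]
    t_cpa = 0.0 if v2 == 0 else max(0.0, min(horizon, -(px * vx + py * vy) / v2))
    dx, dy = px + vx * t_cpa, py + vy * t_cpa
    return math.hypot(dx, dy) > radius

# Target 10 units away, flying parallel: never closes, so it is excluded.
print(cpa_filter((10.0, 0.0), (0.0, 3.0), radius=1.0, horizon=300.0))
# Target converging head-on: kept for full conflict detection.
print(cpa_filter((10.0, 0.0), (-0.1, 0.0), radius=1.0, horizon=300.0))
```

Because the filter only discards targets whose straight-line CPA stays outside the radius, it cannot remove a true threat; it only trims the set handed to the detailed detector, matching the claim that the alert count is unchanged.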
Procedia PDF Downloads 285
6875 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives
Authors: Roberto Cabezas H
Abstract:
The current work shows the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research explores unconventional roles for machines in subjective creative processes by delving into the semantics of data and machine intelligence algorithms in hybrid technological creative contexts, expanding epistemic domains through human-machine cooperation. The creative process in the scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of generative adversarial networks (GANs) and fuzzy clustering in a semi-supervised data creation and analysis pipeline. The model uses GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy clustering is then applied to the data artificially created by the GANs to define narrative transitions built on a membership index. This process allowed the creation of simple and complex spaces with expressive capabilities, with position and light intensity as the parameters guiding the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools such as those proposed in this work are well suited to redefining collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found GANs and fuzzy logic to be ideal tools for developing new computational models based on interaction, learning, emotion, and imagination that expand the traditional algorithmic model of computation.
Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance
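A hedged sketch of the membership-index idea: a lighting state described by position and intensity receives soft memberships to scene prototypes, fuzzy-c-means style, and a narrative transition could then blend scenes in proportion to those memberships. The prototype centres and fuzzifier value are illustrative, not parameters taken from the piece.

```python
# Hedged sketch: fuzzy membership of a lighting state to scene prototypes,
# using the standard fuzzy-c-means membership formula u_i ~ 1/d_i^(2/(m-1)).
import numpy as np

m = 2.0                                        # fuzzifier (assumed)
centres = np.array([[0.1, 0.2], [0.8, 0.9]])   # two scene prototypes (assumed)
state = np.array([0.3, 0.4])                   # current (position, intensity)

d = np.linalg.norm(state - centres, axis=1)    # distance to each prototype
u = 1.0 / d ** (2.0 / (m - 1.0))
u /= u.sum()                                   # normalized membership index
print(np.round(u, 3))
```

A transition driven by this index would weight each scene's lighting parameters by its membership, so moving the state smoothly in parameter space yields a smooth narrative blend rather than a hard scene cut.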
Procedia PDF Downloads 142
6874 Short-Term versus Long-Term Effect of Waterpipe Smoking Exposure on Cardiovascular Biomarkers in Mice
Authors: Abeer Rababa'h, Ragad Bsoul, Mohammad Alkhatatbeh, Karem Alzoubi
Abstract:
Introduction: Tobacco use is one of the main risk factors for cardiovascular disease (CVD), and for atherosclerosis in particular. Waterpipe smoke (WPS) contains several toxic materials, such as nicotine, carcinogens, tar, carbon monoxide, and heavy metals, and is thus considered one of the toxic environmental factors that should be investigated intensively. Therefore, the aim of this study is to investigate the effect of WPS on several cardiovascular biological markers that may contribute to atherosclerosis in mice. The study was also designed to examine the temporal effects of WPS on atherosclerotic biomarkers upon short-term (2 weeks) and long-term (8 weeks) exposure. Methods: Mice were exposed to WPS, and heart homogenates were analyzed to elucidate the effects of WPS on matrix metalloproteinases (MMPs), endothelin-1 (ET-1), and myeloperoxidase (MPO). Following protein estimation, enzyme-linked immunosorbent assays were performed to measure the expression levels of MMPs (isoforms 1, 3, and 9), MPO, and ET-1. Results: Our data showed that acute exposure to WPS significantly enhances the expression of MMP-3, MMP-9, and MPO (p < 0.05) compared with the corresponding controls. However, the body was capable of normalizing the expression levels of these parameters following continuous exposure for 8 weeks (p > 0.05). Additionally, we showed that ET-1 expression was significantly higher upon chronic exposure to WPS compared with both the control and acute exposure groups (p < 0.05). Conclusion: Waterpipe exposure has a significant negative effect on atherosclerosis, and the enhanced expression of the atherosclerotic biomarkers (MMP-3 and MMP-9, MPO, and ET-1) might represent an early indicator of compensatory efforts to maintain cardiac function after WPS exposure.
Keywords: atherosclerotic biomarkers, cardiovascular disease, matrix metalloproteinase, waterpipe
Procedia PDF Downloads 352
6873 Right to Return and Narrative in Refugee Camps: Case Study in Palestinian Displacement
Authors: Naomi I. Austin
Abstract:
Following WWII, the Geneva Conventions and the Universal Declaration of Human Rights declared the right to return an inalienable right. The right to return has been disputed by the Israeli government and upheld as an individual right by prominent Palestinian activists. Those who contest the Palestinian right to return argue that it would effectively end the state of Israel. After the conquest of Lebanon, the concept of a two-state solution has been effectively shut down. This research paper will seek to utilize interviews with NGO actors and displaced persons, to be gathered through fieldwork conducted in refugee camps and at the bases of international actors, exploring durable and multilateral solutions, beyond state actors and government entities, not only for the refugee crisis but for the forced displacement of Palestinians. The research will center on the perspective of those displaced in order to generate plausible solutions that mitigate the negative effects on displaced persons. This paper will address whether the right to return remains plausible given the expansion of Israeli territorial conquest, and the impact of that expansion on migrations within the Mediterranean region and the EU, especially on policies of integration into host communities.
Keywords: durable solutions, forced displacement, protracted conflict, refugee studies, narrative building, memory, right to return
Procedia PDF Downloads 15
6872 Voices of Dissent: Case Study of a Digital Archive of Testimonies of Political Oppression
Authors: Andrea Scapolo, Zaya Rustamova, Arturo Matute Castro
Abstract:
The “Voices in Dissent” initiative aims to collect and make available in digital format testimonies, letters, and other narratives produced by victims of political oppression from different geographical spaces across the Atlantic. By recovering voices silenced behind official narratives, this open-access online database will provide indispensable tools for rewriting the history of authoritarian regimes from the margins, as memory debates continue to provoke controversy in academic and popular transnational circles. In providing an extensive database of non-hegemonic discourses from a variety of political and social contexts, the project will complement existing European and Latin American studies and invite further interdisciplinary and transnational research. This digital resource, organized geographically and chronologically, will be available to academic communities and the general audience. “Voices in Dissent” will offer a first comprehensive study of these personal accounts of persecution and repression against their historical backgrounds and of their impact on collective memory formation in contemporary societies. The digitalization of these texts will make it possible to run metadata analyses and adopt comparatist approaches for a broad range of research endeavors. Most of the testimonies included in our archive are testimonies of trauma: the trauma of exile, imprisonment, torture, humiliation, and censorship. Research on trauma has now reached critical mass and offers a broad spectrum of critical perspectives. By bringing together testimonies from different geographical and historical contexts, our project will give readers and scholars an extraordinary opportunity to investigate how culture shapes individual and collective memories and provides, or denies, the resources to make sense of and cope with trauma. For scholars dealing with the epistemological and rhetorical analysis of testimonies, an online open-access archive will prove particularly useful for testing theories on truth status and the formation of belief, as well as for studying the articulation of discourse. An important aspect of this project is its pedagogical application, since it will contribute to the creation of Open Educational Resources (OER) to support students and educators worldwide. Through collaboration with our library system, the archive will form part of the Digital Commons database. The texts collected in this online archive will be made available in the original languages as well as in English translation, accompanied by a critical apparatus that contextualizes them historically with relevant background information and bibliographical references. All these materials can serve as a springboard for a broad variety of educational projects and classroom activities; they can also be used to design dedicated content courses or modules. In conclusion, the desirable outcomes of the “Voices in Dissent” project are: (1) the collection and digitization of testimonies of political dissent; (2) the building of a network of scholars, educators, and learners involved in the design, development, and sustainability of the digital archive; and (3) the integration of the archive's content into both research and teaching, such as the publication of scholarly articles, the design of new upper-level courses, and the integration of the materials into existing courses.
Keywords: digital archive, dissent, open educational resources, testimonies, transatlantic studies
Procedia PDF Downloads 106
6871 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN
Authors: Mohamed Gaafar, Evan Davies
Abstract:
Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes, and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced from different water uses into stormwater sewers and thus into freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl or to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here covers only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks, and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. Nevertheless, this model was built only to the trunk level, meaning that only the main drainage features were represented. In addition, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, which is not suitable for studying water quality. The first goal was therefore to complete and update all stormwater network components. Available GIS data were then used to calculate catchment properties such as slope, length, and imperviousness. To calibrate and validate this model, data from two temporary pipe flow monitoring stations, collected during the last summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on the model results was investigated. Model results were found to be affected by the ratio of impervious areas. The catchment length was also tested, as it is only an approximate representation of the catchment shape, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, where the lower value pertained to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63, respectively, were all within acceptable ranges.
Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN
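The goodness-of-fit measures quoted above (correlation coefficient, peak error, volume error) can be computed as in the following sketch; the observed and simulated flow series here are short synthetic stand-ins, not the Edmonton monitoring data.

```python
# Hedged sketch: validation statistics for observed vs. simulated flows.
import numpy as np

observed = np.array([0.2, 0.5, 1.4, 2.1, 1.0, 0.4])    # synthetic (m^3/s)
simulated = np.array([0.25, 0.55, 1.3, 2.15, 0.9, 0.45])

r = np.corrcoef(observed, simulated)[0, 1]              # correlation coefficient
peak_error = (simulated.max() - observed.max()) / observed.max() * 100
volume_error = (simulated.sum() - observed.sum()) / observed.sum() * 100
print(f"r={r:.3f}  peak error={peak_error:+.2f}%  volume error={volume_error:+.2f}%")
```

In a real calibration workflow these statistics would be recomputed after each adjustment of the roughness coefficients and catchment parameters until they fall within the acceptance thresholds.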
Procedia PDF Downloads 298