Search results for: apache kafka
41 Researching Apache Hama: A Pure BSP Computing Framework
Authors: Kamran Siddique, Yangwoo Kim, Zahid Akhtar
Abstract:
In recent years, technological advancements have led to a deluge of data from distinct domains, and the development of solutions based on parallel and distributed computing still has a long way to go. For this reason, research and development of massive computing frameworks is continuously growing. At this stage, highlighting a potential research area along with key insights could be an asset for researchers in the field. This paper therefore explores one of the emerging distributed computing frameworks, Apache Hama, a Top-Level Project under the Apache Software Foundation based on the Bulk Synchronous Parallel (BSP) model. We present an unbiased and critical examination of Apache Hama and conclude with research directions to assist interested researchers.
Keywords: apache hama, bulk synchronous parallel, BSP, distributed computing
Procedia PDF Downloads 249
40 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics
Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood
Abstract:
We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2), to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.
Keywords: digital forensics, cloud computing, cyber security, spark, Kubernetes, Kafka
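The abstract does not include implementation details; as a rough illustration of the ingestion pattern it describes, where Kafka distributes file-processing work that Spark executes in parallel, here is a minimal PySpark sketch. The topic name, sink paths, and the hashing step are assumptions, not taken from the paper.

```python
# Hypothetical sketch of DFORC2-style evidence ingestion: Kafka carries
# carved-file payloads, Spark workers fingerprint them in parallel.
# Topic, paths, and field usage are assumptions, not from the paper.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sha2

spark = (SparkSession.builder
         .appName("evidence-ingest-sketch")
         .getOrCreate())

# Each Kafka record's value is assumed to hold a file's raw bytes.
frames = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "carved-files")          # assumed topic
          .load())

# Hash every file so known-file filtering can run downstream.
hashed = frames.select(
    col("key").cast("string").alias("file_id"),
    sha2(col("value"), 256).alias("sha256"))

query = (hashed.writeStream
         .format("parquet")                             # assumed sink
         .option("path", "/evidence/hashes")
         .option("checkpointLocation", "/evidence/ckpt")
         .start())
query.awaitTermination()
```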
Procedia PDF Downloads 391
39 Performance Comparison of Thread-Based and Event-Based Web Servers
Authors: Aikaterini Kentroti, Theodore H. Kaskalis
Abstract:
Today, web servers are expected to serve thousands of client requests concurrently within stringent response time limits. In this paper, we experimentally evaluate and compare the performance and resource utilization of popular web servers that differ in their approach to handling concurrency. More specifically, Central Processing Unit (CPU)- and I/O-intensive tests were conducted against the thread-based Apache and Go as well as the event-based Nginx and Node.js under increasing concurrent load. The tests involved concurrent users requesting a term of the Fibonacci sequence (the 10th, 20th, 30th) and the content of a table from the database. The results show that Go achieved the best performance in all benchmark tests. For example, Go reached two times higher throughput than Node.js and five times higher than Apache and Nginx in the 20th Fibonacci term test. In addition, Go had the smallest memory footprint and demonstrated the most efficient resource utilization in terms of CPU usage. In contrast, Node.js had by far the largest memory footprint, consuming up to 90% more memory than Nginx and Apache. Regarding the performance of Apache and Nginx, our findings indicate that Hypertext Preprocessor (PHP) becomes a bottleneck when the servers are requested to respond by performing CPU-intensive tasks under increasing concurrent load.
Keywords: apache, Go, Nginx, node.js, web server benchmarking
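The CPU-intensive test methodology, many concurrent users each requesting a Fibonacci term, is easy to reproduce with a small load generator. A minimal sketch in Python (using the third-party requests library), assuming a hypothetical /fib/20 endpoint:

```python
# Minimal sketch of the paper's CPU-intensive test: fire concurrent
# requests for a Fibonacci term and measure throughput and latency.
# The endpoint path and server address are assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://localhost:8080/fib/20"   # hypothetical endpoint
CONCURRENCY = 100
REQUESTS = 1000

def fetch(_):
    t0 = time.perf_counter()
    requests.get(URL, timeout=30)
    return time.perf_counter() - t0

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(fetch, range(REQUESTS)))
elapsed = time.perf_counter() - start

print(f"throughput: {REQUESTS / elapsed:.1f} req/s")
print(f"mean latency: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```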
Procedia PDF Downloads 96
38 Outcome of Obstetric Admission to General Intensive Care over a Period of 3 Years
Authors: Kamel Abdelaziz Mohamed
Abstract:
Introduction: Inadequate knowledge about obstetric admissions and infrequent dealing with obstetric patients in the ICU result in high mortality and morbidity. Aim of the work: To evaluate the indications, course, severity of illness, and outcome of obstetric patients admitted to the intensive care unit (ICU). Patients and Methods: We collected baseline data and Acute Physiology and Chronic Health Evaluation II (APACHE II) scores. ICU mortality was the primary outcome. Results: Seventy obstetric patients were admitted to the ICU over 3 years, 36 of them (51.4%) during the antepartum period. The primary obstetric indication for ICU admission was pregnancy-induced hypertension (22 patients, 31.4%), with sepsis (8 patients, 11.4%) the leading non-obstetric indication. The mean APACHE II score was 19.6. The predicted mortality rate based on the APACHE II score was 22%; however, there were only 4 maternal deaths (5.7%) among the obstetric patients admitted to the ICU. Conclusion: Evaluation of obstetric patients by APACHE II scores predicted a higher mortality rate than the overall mortality actually observed. Regular follow-up, together with early detection of complications and prompt ICU admission followed by proper management by a specialized team, can improve mortality.
Keywords: obstetric, complication, postpartum, sepsis
Procedia PDF Downloads 306
37 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network
Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour
Abstract:
Software-defined networking has in recent years come into the sight of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes engage together within a single device in the network infrastructure such as switches and routers, the two planes are kept separated in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-level devices forward packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side and ultimately leads to the inaccessibility of the controller and the lack of network service for legitimate users. Thus, the protection of this novel network architecture against denial of service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day. It is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. It concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden that numerous DDoS attacks place on the SDN network. The framework provides an efficient defence and monitoring mechanism against DDoS attacks by employing state-of-the-art machine learning techniques.
Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network
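The abstract names Spark, Kafka and machine learning but no specific classifier; below is a hedged sketch of the kind of flow-classification step such a framework implies, with a hypothetical flow schema and logistic regression standing in for the unspecified model.

```python
# Hedged sketch: train a classifier on labelled SDN flow statistics,
# then flag suspicious flows. Column names and the choice of logistic
# regression are assumptions; the paper only says "state-of-the-art
# machine learning techniques".
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ddos-sketch").getOrCreate()

# Assumed CSV with numeric per-flow counters and a 0/1 is_attack label.
flows = spark.read.csv("flows.csv", header=True, inferSchema=True)

features = VectorAssembler(
    inputCols=["pkt_count", "byte_count", "duration", "src_entropy"],
    outputCol="features")

train = features.transform(flows)
model = LogisticRegression(labelCol="is_attack").fit(train)

# Flag suspicious flows so the controller can install drop rules.
scored = model.transform(train)
scored.filter("prediction = 1.0").select("flow_id").show()
```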
Procedia PDF Downloads 168
36 Bug Localization on Single-Line Bugs of Apache Commons Math Library
Authors: Cherry Oo, Hnin Min Oo
Abstract:
Software bug localization is one of the most costly tasks in program repair. There is therefore high demand for automated bug localization techniques that can guide programmers to the locations of bugs with little human intervention. Spectrum-based bug localization aims to help software developers discover bugs rapidly by investigating abstractions of program traces to produce a ranking of the most probably buggy modules. Using the Apache Commons Math library project, we study the diagnostic accuracy of our spectrum-based bug localization metric. Our results show that the strong performance of a specific similarity coefficient, used to inspect the program spectra, is mostly effective at localizing single-line bugs.
Keywords: software testing, bug localization, program spectra, bug
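The abstract does not name its similarity coefficient; the Ochiai coefficient is a common choice in the spectrum-based fault localization literature and serves as a representative example of how program spectra yield a suspiciousness ranking.

```python
# Illustrative spectrum-based ranking using the Ochiai coefficient, a
# common choice in the SBFL literature; the paper's own metric may
# differ. For each program element we count how often it executed in
# failing (ef) and passing (ep) test runs.
import math

def ochiai(ef, ep, total_failing):
    """Suspiciousness of one element from its spectrum counts."""
    if ef == 0:
        return 0.0
    return ef / math.sqrt(total_failing * (ef + ep))

# Toy spectra: line -> (covered-by-failing, covered-by-passing)
spectra = {"line 12": (3, 1), "line 47": (1, 9), "line 88": (3, 7)}
total_failing = 3

ranking = sorted(spectra,
                 key=lambda ln: ochiai(*spectra[ln], total_failing),
                 reverse=True)
print(ranking)   # most suspicious line first
```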
Procedia PDF Downloads 141
35 A High-Level Co-Evolutionary Hybrid Algorithm for the Multi-Objective Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a hybrid distributed algorithm is suggested for the multi-objective job shop scheduling problem. Many new approaches are used in the design steps of the distributed algorithm. The co-evolutionary structure of the algorithm and the competition between different communicating hybrid algorithms, which are executed simultaneously, lead to an efficient search. Using several machines to distribute the algorithms, at both the iteration and solution levels, increases computational speed. The proposed algorithm is able to find the Pareto solutions of large problems in a shorter time than other algorithms in the literature. The Apache Spark and Hadoop platforms have been used for the distribution of the algorithm. The suggested algorithm and implementations have been compared with the results of successful algorithms in the literature. The results prove the efficiency and high speed of the algorithm.
Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, multi-objective optimization
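As a hedged sketch of the distribution idea, independent hybrid searches can be launched in parallel with Spark and the best result kept; here `local_search` is a stub for the paper's communicating hybrid algorithms.

```python
# Island-model sketch: one independent search per Spark partition.
# `local_search` is a placeholder, not the paper's actual algorithm.
import random

from pyspark import SparkContext

def local_search(seed):
    """Stub for one hybrid search run (hypothetical objective)."""
    rng = random.Random(seed)
    best = min(rng.randint(100, 200) for _ in range(1000))  # fake makespan
    return seed, best

sc = SparkContext(appName="island-search-sketch")
seeds = sc.parallelize(range(16), numSlices=16)   # one search per slice
results = seeds.map(local_search).collect()
print(min(results, key=lambda r: r[1]))           # best seed and value
sc.stop()
```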
Procedia PDF Downloads 362
34 A Hybrid Distributed Algorithm for Solving Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a distributed hybrid algorithm is proposed for solving the job shop scheduling problem. The suggested method executes different artificial neural networks, heuristics and meta-heuristics simultaneously on more than one machine. The neural networks are used to handle the constraints of the problem, the meta-heuristics search the global space, and the heuristics are used to prevent premature convergence. To attain an efficient distributed intelligent method for solving big and distributed job shop scheduling problems, the Apache Spark and Hadoop frameworks are used. New approaches are applied in the implementation and design steps of the algorithm. Comparison between the proposed algorithm and other efficient algorithms from the literature shows its efficiency: it is able to solve large problems in a short time.
Keywords: distributed algorithms, Apache Spark, Hadoop, job shop scheduling, neural network
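Any such search needs a fast objective function. Below is a minimal job shop makespan evaluator, written for illustration since the paper does not give its encoding.

```python
# Minimal makespan evaluator for a job shop schedule, using a common
# permutation-with-repetition encoding (hypothetical; the paper's own
# encoding is not specified).
def makespan(jobs, sequence):
    """jobs: job -> ordered list of (machine, duration) operations.
    sequence: order in which the next pending operation of each job
    is dispatched."""
    job_ready = {j: 0.0 for j in jobs}        # when each job is free
    mach_ready = {}                           # when each machine is free
    next_op = {j: 0 for j in jobs}
    for j in sequence:
        machine, dur = jobs[j][next_op[j]]
        start = max(job_ready[j], mach_ready.get(machine, 0.0))
        job_ready[j] = mach_ready[machine] = start + dur
        next_op[j] += 1
    return max(job_ready.values())

jobs = {"J1": [("M1", 3), ("M2", 2)], "J2": [("M2", 2), ("M1", 4)]}
print(makespan(jobs, ["J1", "J2", "J1", "J2"]))  # -> 7.0
```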
Procedia PDF Downloads 386
33 A Hybrid Distributed Algorithm for Multi-Objective Dynamic Flexible Job Shop Scheduling Problem
Authors: Aydin Teymourifar, Gurkan Ozturk
Abstract:
In this paper, a hybrid distributed algorithm is suggested for the multi-objective dynamic flexible job shop scheduling problem. The proposed algorithm is high level, in that several algorithms search the space on different machines simultaneously; it is also a hybrid algorithm that takes advantage of artificial intelligence, evolutionary and optimization methods. Distribution is done at different levels, and new approaches are used in the design of the algorithm. The Apache Spark and Hadoop frameworks have been used for the distribution of the algorithm. The Pareto optimality approach is used for solving the multi-objective benchmarks. The suggested algorithm, which is able to solve large problems in short times, has been compared with the successful algorithms of the literature. The results prove the high speed and efficiency of the algorithm.
Keywords: distributed algorithms, Apache Spark, Hadoop, flexible dynamic job shop scheduling, multi-objective optimization
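The Pareto optimality approach mentioned here reduces to keeping the non-dominated set of objective vectors; a minimal sketch follows, with both objectives minimized (for example, makespan and total tardiness).

```python
# Non-dominated filtering for minimization objectives; the objective
# pairs below are made-up values for illustration.
def dominates(a, b):
    """True if a is at least as good everywhere and better somewhere."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

solutions = [(90, 12), (85, 20), (100, 8), (95, 8), (85, 25)]
print(pareto_front(solutions))   # -> [(90, 12), (85, 20), (95, 8)]
```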
Procedia PDF Downloads 353
32 Predictors of Glycaemic Variability and Its Association with Mortality in Critically Ill Patients with or without Diabetes
Authors: Haoming Ma, Guo Yu, Peiru Zhou
Abstract:
Background: Previous studies show that dysglycemia, mostly hyperglycemia, hypoglycemia and glycemic variability (GV), is associated with excess mortality in critically ill patients, especially those without diabetes. Because of this association, glycemic variability is an increasingly important measure of glucose control in the intensive care unit (ICU). However, there are limited data on the relationship between different clinical factors, glycemic variability and clinical outcomes when patients are categorized by diabetes (DM) status. This retrospective study of 958 ICU patients was conducted to investigate the relationship between GV and outcome in critically ill patients, and further to determine the significant factors that contribute to glycemic variability. Aim: We hypothesize that the factors contributing to mortality and to glycemic variability differ between critically ill patients with and without diabetes. The primary aim of this study was to determine which form of dysglycemia (hyperglycemia, hypoglycemia or glycemic variability) is independently associated with an increase in mortality among critically ill patients in each group (DM/non-DM). Secondary objectives were to investigate the factors affecting glycemic variability in the two groups. Method: A total of 958 diabetic and non-diabetic patients with severe diseases in the ICU were selected for this retrospective analysis. Glycemic variability was defined as the coefficient of variation (CV) of blood glucose. The main outcome was death during hospitalization; the secondary outcome was GV. A logistic regression model was used to identify factors associated with mortality. The relationships between GV and other variables were investigated using linear regression analysis. Results: Information on age, APACHE II score, GV, gender, in-ICU treatment and nutrition was available for 958 subjects. The predictors remaining in the final logistic regression model for mortality differed significantly between the DM and non-DM groups. Glycemic variability was associated with an increase in mortality in both the DM group (odds ratio 1.05; 95% CI: 1.03-1.08, p<0.001) and the non-DM group (odds ratio 1.07; 95% CI: 1.03-1.11, p=0.002). For critically ill patients without diabetes, factors associated with glycemic variability included APACHE II score (regression coefficient, 95% CI: 0.29, 0.22-0.36, p<0.001), mean BG (0.73, 0.46-1.01, p<0.001), total parenteral nutrition (2.87, 1.57-4.17, p<0.001), serum albumin (-0.18, -0.271 to -0.082, p<0.001), insulin treatment (2.18, 0.81-3.55, p=0.002) and duration of ventilation (0.006, 0.002-1.010, p=0.003). For diabetic patients, APACHE II score (0.203, 0.096-0.310, p<0.001), mean BG (0.503, 0.138-0.869, p=0.007) and duration of diabetes (0.167, 0.033-0.301, p=0.015) remained as independent risk factors for GV. Conclusion: We found that the relationship between dysglycemia and mortality differs between the diabetes and non-diabetes groups, and we confirm that GV was associated with excess mortality in both DM and non-DM patients. Furthermore, APACHE II score, mean BG, total parenteral nutrition, serum albumin, insulin treatment and duration of ventilation were significantly associated with an increase in GV in non-DM patients, while APACHE II score, mean BG and duration of diabetes (years) remained independent risk factors for increased GV in DM patients. These findings provide important context for further prospective trials investigating the effect of different clinical factors in critically ill patients with or without diabetes.
Keywords: diabetes, glycemic variability, predictors, severe disease
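Since the exposure variable is simply the coefficient of variation of each patient's glucose measurements, the computation is worth making concrete; the readings below are made up.

```python
# Worked example of the paper's GV definition: coefficient of
# variation (CV) of blood glucose. Values are hypothetical.
from statistics import mean, stdev

glucose = [7.8, 10.2, 6.1, 12.4, 9.0, 8.3]   # mmol/L, made-up readings

cv = stdev(glucose) / mean(glucose) * 100
print(f"CV = {cv:.1f}%")   # higher CV = greater glycemic variability
```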
Procedia PDF Downloads 188
31 Pattern of ICU Admission due to Drug Problems
Authors: Kamel Abd Elaziz Mohamed
Abstract:
Introduction: Drug related problems (DRPs) are of major concern, affecting patients of both sexes. They impose a considerable economic burden on society and health-care systems. Aim of the work: The aim of this work was to identify and categorize drug-related problems in an adult intensive care unit. Patients and methods: This was a prospective, observational study in which eighty-six patients were included. They were consecutively admitted to the ICU through the emergency room or transferred from the general ward due to DRPs. Parameters recorded for the study included length of stay in the ICU, need for cardiovascular support or mechanical ventilation, dialysis, and APACHE II score. Results: Drug related problems represented 3.6% of total ICU admissions. The median (range) APACHE II score for the 86 patients included in the study was 17 (10-23), and length of ICU stay was 2.4 (1.5-4.2) days. In 45 patients (52%), the DRP was drug overdose (group I), while another DRP was present in the other 41 patients (48%, group II). Patients in group I were older (39 years versus 32 years in group II), with significantly impaired renal function. The need for inotropic drugs and mechanical ventilation as well as the length of stay (LOS) in the ICU were significantly higher in group I. There was no significant difference in GCS between the groups; however, the APACHE II score was significantly higher in group I. Only four patients (4.6%) were admitted after a suicide attempt, and three patients (3.4%) had trauma-related drug admissions, all in group I. Nineteen percent of the patients had a drug related problem due to hypoglycaemic medication, followed by tranquilizers (15%). Adverse drug effects, followed by failure to receive medication, were the most common causes of drug problems in group II. The total mortality rate was 4.6%, all deaths being eventually non-preventable. Conclusion: Critically ill patients admitted due to drug related problems represented a small proportion (3.6%) of admissions to the ICU. Hypoglycaemic medication was one of the most common causes of admission for drug related problems.
Keywords: drug related problems, ICU, cost, safety
Procedia PDF Downloads 332
30 Trauma Scores and Outcome Prediction After Chest Trauma
Authors: Mohamed Abo El Nasr, Mohamed Shoeib, Abdelhamid Abdelkhalik, Amro Serag
Abstract:
Background: Early assessment of the severity of chest trauma, whether blunt or penetrating, is of critical importance in predicting patient outcome. Different trauma scoring systems are widely available and use anatomical or physiological parameters to predict patient morbidity or mortality. To date, there is no ideal, universally accepted trauma score that can be applied in all trauma centers and is suitable for assessing the severity of chest trauma. Aim: Our aim was to compare various trauma scoring systems regarding their ability to predict morbidity and mortality in chest trauma patients. Patients and Methods: This was a prospective study including 400 patients with chest trauma who were managed at Tanta University Emergency Hospital, Egypt, over a period of 2 years (March 2014 until March 2016). The patients were divided into 2 groups according to the mode of trauma: blunt or penetrating. The collected data included age, sex, hemodynamic status on admission, intrathoracic injuries, and associated extra-thoracic injuries. Patient outcomes, including mortality, need for thoracotomy, need for ICU admission, need for mechanical ventilation, length of hospital stay and the development of acute respiratory distress syndrome, were also recorded. The relevant data were used to calculate the following trauma scores: 1. anatomical scores, including the Abbreviated Injury Scale (AIS), Injury Severity Score (ISS), New Injury Severity Score (NISS) and Chest Wall Injury Scale (CWIS); 2. physiological scores, including the Revised Trauma Score (RTS) and Acute Physiology and Chronic Health Evaluation II (APACHE II) score; 3. a combined score, the Trauma and Injury Severity Score (TRISS); and 4. a chest-specific score, the Thoracic Trauma Severity Score (TTSS). All these scores were analyzed statistically to determine their sensitivity and specificity and were compared regarding their predictive power for mortality and morbidity in blunt and penetrating chest trauma patients. Results: The incidence of mortality was 3.75% (15/400). Eleven patients (11/230) died in the blunt chest trauma group, while 4/170 patients died in the penetrating trauma group. The mortality rate increased more than three-fold, to 13% (13/100), in patients with severe chest trauma (ISS > 16). The physiological scores APACHE II and RTS had the highest predictive value for mortality in both blunt and penetrating chest injuries. The physiological score APACHE II, followed by the combined score TRISS, was more predictive of intensive care admission in penetrating injuries, while RTS was more predictive in blunt trauma. RTS also had a higher predictive value for the need for mechanical ventilation, followed by the combined score TRISS. The APACHE II score was more predictive of the need for thoracotomy in penetrating injuries, while the chest-specific TTSS was more predictive in blunt injuries. The anatomical score ISS and the TTSS were more predictive of prolonged hospital stay in penetrating and blunt injuries, respectively. Conclusion: Trauma scores that include physiological parameters have higher predictive power for mortality in both blunt and penetrating chest trauma. They are more suitable for assessing injury severity and predicting patient outcome.
Keywords: chest trauma, trauma scores, blunt injuries, penetrating injuries
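Of the anatomical scores listed, the ISS has a simple closed form: the sum of squares of the three highest AIS grades taken from different body regions, with any AIS of 6 forcing the maximum score of 75. A small worked example:

```python
# ISS calculator: sum of squares of the three worst AIS grades across
# body regions; an AIS of 6 in any region sets ISS to 75. The patient
# below is hypothetical.
def iss(region_ais):
    """region_ais: body region -> worst AIS grade (0-6) in that region."""
    grades = sorted(region_ais.values(), reverse=True)
    if grades and grades[0] == 6:
        return 75
    return sum(g * g for g in grades[:3])

patient = {"head": 3, "chest": 4, "abdomen": 2,
           "extremities": 2, "face": 1, "external": 0}
print(iss(patient))   # 4^2 + 3^2 + 2^2 = 29, i.e. severe (ISS > 16)
```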
Procedia PDF Downloads 421
29 Tracking and Classifying Client Interactions with Personal Coaches
Authors: Kartik Thakore, Anna-Roza Tamas, Adam Cole
Abstract:
The World Health Organization (WHO) reports that by 2030 more than 23.7 million deaths annually will be caused by cardiovascular diseases (CVDs), which had an economic impact of $3.76 T in 2008. Metabolic syndrome is a disorder of multiple metabolic risk factors strongly implicated in the development of cardiovascular diseases. Guided lifestyle intervention driven by live coaching has been shown to have a positive impact on metabolic risk factors. Individuals' paths to improved (decreased) metabolic risk factors are driven by personal motivation and personalized messages delivered by coaches and augmented by technology. Using interactions captured between 400 individuals and 3 coaches over a program period of 500 days, a preliminary model was designed. A novel real-time event tracking system was created to track and classify clients based on their genetic profile, baseline questionnaires and usage of a mobile application with live coaching sessions. Classification of clients and coaches was done using a support vector machines application built on Apache Spark, the Stanford Natural Language Processing Library (SNLPL) and decision-modeling.
Keywords: guided lifestyle intervention, metabolic risk factors, personal coaching, support vector machines application, Apache Spark, natural language processing
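A hedged sketch of an SVM text classifier on Spark, in the spirit of the interaction-classification step: the paper pairs Spark with the Stanford NLP library, whereas this sketch uses Spark's own tokenizer and TF-IDF features to stay self-contained, and the labels are hypothetical.

```python
# Hypothetical sketch: classify coaching messages with a linear SVM
# on Spark. label 1.0 might mean "client responds to nudges".
from pyspark.ml import Pipeline
from pyspark.ml.classification import LinearSVC
from pyspark.ml.feature import HashingTF, IDF, Tokenizer
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("coach-msg-sketch").getOrCreate()

msgs = spark.createDataFrame(
    [("great job on todays walk", 1.0),
     ("please log your meals", 0.0)],
    ["text", "label"])

pipeline = Pipeline(stages=[
    Tokenizer(inputCol="text", outputCol="words"),
    HashingTF(inputCol="words", outputCol="tf"),
    IDF(inputCol="tf", outputCol="features"),
    LinearSVC(maxIter=50)])

model = pipeline.fit(msgs)
model.transform(msgs).select("text", "prediction").show()
```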
Procedia PDF Downloads 432
28 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
Spectral Clustering (SC) is one of the most popular current clustering techniques because of its advantages over conventional approaches such as hierarchical clustering and k-means. However, one of the disadvantages of SC is that it is time consuming, since it requires computing eigenvectors. To overcome this disadvantage, a number of approaches have been proposed, such as the Power Iteration Clustering (PIC) technique, a variant of SC. PIC's advantages are: 1) scalability and efficiency, 2) finding a single pseudo-eigenvector instead of computing the eigenvectors, and 3) obtaining a linear combination of the eigenvectors in linear time. However, its main disadvantage is an inter-class collision problem, because a single pseudo-eigenvector is not always enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome PIC's inter-class collision problem while retaining the efficiency of PIC. In this paper, we develop Parallel DPIC (PDPIC) to improve time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and shorter running times than the compared algorithms.
Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
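The PIC idea the paper builds on is compact enough to sketch: run a truncated power iteration on the row-normalized affinity matrix to get one pseudo-eigenvector whose entries separate clusters; deflation-based variants repeat the process against previously found vectors to avoid inter-class collisions. A toy example, with the affinity matrix made up:

```python
# Toy PIC sketch: two tight triangles {0,1,2} and {3,4,5} joined by a
# weak edge; the truncated power iteration should separate them.
import numpy as np

A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
A[2, 3] = A[3, 2] = 0.05           # weak inter-cluster link

W = A / A.sum(axis=1, keepdims=True)   # row-normalized: W = D^-1 A

v = np.random.default_rng(0).random(6)
v /= np.abs(v).sum()
for _ in range(10):                # stop early: full convergence would
    v = W @ v                      # reach the trivial uniform vector
    v /= np.abs(v).sum()           # L1 normalization, as in PIC

print(np.round(v, 4))   # entries settle into two distinct levels;
                        # k-means on them would recover the clusters
```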
Procedia PDF Downloads 188
27 RA-Apriori: An Efficient and Faster MapReduce-Based Algorithm for Frequent Itemset Mining on Apache Flink
Authors: Sanjay Rathee, Arti Kashyap
Abstract:
Extraction of useful information from large datasets is one of the most important research problems, and association rule mining is one of the best methods for this purpose. Finding possible associations between items in large transaction-based datasets (finding frequent patterns) is the most important part of association rule mining. Many algorithms exist to find frequent patterns, but the Apriori algorithm remains a preferred choice due to its ease of implementation and natural tendency to be parallelized. Many single-machine Apriori variants exist, but the massive amount of data available these days is beyond the capacity of a single machine. Therefore, to meet the demands of this ever-growing data, there is a need for an Apriori algorithm that runs on multiple machines. For these types of distributed applications, MapReduce is a popular fault-tolerant framework. Hadoop is one of the best open-source software frameworks using the MapReduce approach for distributed storage and distributed processing of huge datasets on clusters built from commodity hardware. However, the heavy disk I/O at each iteration of a highly iterative algorithm like Apriori makes Hadoop inefficient. A number of MapReduce-based platforms have been developed for parallel computing in recent years. Among them, two platforms, Spark and Flink, have attracted a lot of attention because of their built-in support for distributed computations. Earlier, we proposed a Reduced-Apriori algorithm on the Spark platform which outperforms parallel Apriori, first because of the use of Spark and second because of the improvement we proposed to standard Apriori. This work is therefore a natural sequel: it targets implementing, testing and benchmarking Apriori, Reduced-Apriori and our new algorithm, ReducedAll-Apriori, on Apache Flink, and compares them with the Spark implementation. Flink, a streaming dataflow engine, overcomes the disk I/O bottlenecks of MapReduce, providing an ideal platform for distributed Apriori. Flink's pipelined structure allows the next iteration to start as soon as partial results of the earlier iteration are available, so there is no need to wait for all reducers' results before starting the next iteration. We conduct in-depth experiments to gain insight into the effectiveness, efficiency and scalability of the Apriori and RA-Apriori algorithms on Flink.
Keywords: apriori, apache flink, Mapreduce, spark, Hadoop, R-Apriori, frequent itemset mining
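For reference, the iteration structure that all these distributed variants parallelize is the classic Apriori loop: generate level-(k+1) candidates from frequent k-itemsets, then make a counting pass over the transactions. A minimal in-memory sketch, without the subset-pruning step:

```python
# Classic Apriori loop in miniature; the counting pass is the step a
# cluster would distribute. Transactions below are made-up.
from itertools import combinations

def apriori(transactions, min_support):
    transactions = [set(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    k_sets = [frozenset([i]) for i in sorted(items)]
    k = 1
    while k_sets:
        # counting pass (the map/reduce-heavy step on a cluster)
        counts = {c: sum(c <= t for t in transactions) for c in k_sets}
        level = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(level)
        # candidate generation: join frequent k-itemsets into (k+1)-sets
        keys = list(level)
        k_sets = {a | b for a, b in combinations(keys, 2)
                  if len(a | b) == k + 1}
        k += 1
    return frequent

tx = [["bread", "milk"], ["bread", "beer"],
      ["bread", "milk", "beer"], ["milk"]]
print(apriori(tx, min_support=2))   # singletons plus two frequent pairs
```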
Procedia PDF Downloads 294
26 The Use of Venous Glucose, Serum Lactate and Base Deficit as Biochemical Predictors of Mortality in Polytraumatized Patients: A Comparative Study with the Trauma and Injury Severity Score and Acute Physiology and Chronic Health Evaluation IV
Authors: Osama Moustafa Zayed
Abstract:
Aim of the work: To evaluate the effectiveness of venous glucose, serum lactate levels and base deficit in polytraumatized patients as simple parameters to predict mortality, compared to the predictive value of the Trauma and Injury Severity Score (TRISS) and Acute Physiology and Chronic Health Evaluation IV (APACHE IV). Introduction: Trauma is a serious global health problem, accounting for approximately one in 10 deaths worldwide, or 5 million deaths per year. Prediction of mortality in trauma patients is an important part of trauma care. Several trauma scores have been devised to predict injury severity and risk of mortality, of which TRISS is the most commonly used. Regardless of its accuracy, a score of this kind is based on an anatomical description of every injury and cannot be assigned to a patient until a full diagnostic procedure has been performed. We therefore hypothesized that admission glucose, lactate levels and base deficit would be early, easy and rapid predictors of mortality. Patients and Methods: A comparative cross-sectional study of 282 polytraumatized patients who attended the Emergency Department (ED) of Suez Canal University Hospital during the period from 1/1/2012 to 1/4/2013. Results: We found that the best cutoff value of the TRISS probability-of-survival score for prediction of mortality among polytraumatized patients was 90, with 77% sensitivity and 89% specificity, using the area under the ROC curve (0.89) at 95% CI. APACHE IV demonstrated 67% sensitivity and 95% specificity at 95% CI at a cutoff point of 99. The best cutoff value of random blood sugar (RBS) for prediction of mortality was >140 mg/dl, with 89% sensitivity and 49% specificity. The best cutoff value of base deficit was less than -5.6, with 64% sensitivity and 93% specificity. The best cutoff point of lactate was >2.6 mmol/L, with 92% sensitivity and 42% specificity. Conclusion: Among all the evaluated predictors of mortality (laboratory values and scores), based on the cutoff values estimated from ROC curve analysis, the highest risk of mortality was found using a cutoff value of 90 for the TRISS score, while among laboratory parameters the highest risk of mortality was with serum lactate >2.6. All three laboratory parameters were accurate in predicting mortality in polytraumatized patients and close to one another, with areas under the curve of 0.82 for serum lactate, 0.79 for base deficit and 0.77 for RBS.
Keywords: APACHE IV, emergency department, polytraumatized patients, serum lactate
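Cutoffs like these are typically obtained by maximizing the Youden index (sensitivity + specificity - 1) over the ROC curve; a small sketch with made-up lactate values:

```python
# Youden-index cutoff selection over candidate thresholds; the
# lactate values and outcomes below are fabricated for illustration.
import numpy as np

lactate = np.array([1.1, 1.9, 2.2, 2.8, 3.4, 4.0, 5.2, 2.5])
died    = np.array([0,   0,   0,   1,   1,   1,   1,   0  ])

best = None
for cut in np.unique(lactate):
    pred = lactate > cut
    sens = (pred & (died == 1)).sum() / (died == 1).sum()
    spec = (~pred & (died == 0)).sum() / (died == 0).sum()
    youden = sens + spec - 1
    if best is None or youden > best[0]:
        best = (youden, cut, sens, spec)

print(f"best cutoff > {best[1]}: sens={best[2]:.2f}, spec={best[3]:.2f}")
```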
Procedia PDF Downloads 292
25 Transferring Data from Glucometer to Mobile Device via Bluetooth with Arduino Technology
Authors: Tolga Hayit, Ucman Ergun, Ugur Fidan
Abstract:
Being healthy is undoubtedly an indispensable necessity for human life. With technological improvements, various health monitoring and imaging systems have been developed in the literature to satisfy these needs. In this context, monitoring and recording individual health data via wireless technology is also part of these studies. Mobile devices, which are found in almost every house, have become indispensable in our lives, and their wireless technology infrastructure gives them an important place in health monitoring anywhere and at any time; for this reason, they are used in health monitoring systems. In this study, an Arduino, an open-source microcontroller board, was used, to which a sample glucose measuring device was connected in series. In this way, the glucose data (glucose level and time) obtained with the glucometer are transferred over a Bluetooth channel to a mobile device based on the Android operating system. A mobile application was developed using the Apache Cordova framework for listing the data, presenting it graphically, and reading it from the Arduino. Apache Cordova, HTML, JavaScript and CSS were used for the implementation. The data received from the glucometer are stored in the local database of the mobile device. The intent is that people can transfer their measurements to their mobile device using wireless technology and access graphical representations of their data. In this context, the aim of the study is to enable health monitoring using the different wireless technologies that today's mobile devices support, and thus to contribute to other work done in this area.
Keywords: Arduino, Bluetooth, glucose measurement, mobile health monitoring
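The paper's mobile side is an Apache Cordova app; as a platform-neutral illustration of the same data flow, this Python sketch reads newline-delimited glucose records from a Bluetooth serial port (for example, an RFCOMM binding to the Arduino's Bluetooth module) and stores them locally. The port name, baud rate, and record format are all assumptions.

```python
# Hypothetical receiver for "glucose,timestamp" lines sent by an
# Arduino over Bluetooth serial. Requires pyserial; port, baud rate,
# and record format are assumptions, not taken from the paper.
import sqlite3

import serial  # pyserial

port = serial.Serial("/dev/rfcomm0", 9600, timeout=5)
db = sqlite3.connect("glucose.db")
db.execute("CREATE TABLE IF NOT EXISTS readings (mgdl REAL, ts TEXT)")

while True:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if not line:
        continue                       # read timed out, keep listening
    mgdl, ts = line.split(",")         # e.g. "104,2017-03-01T08:15"
    db.execute("INSERT INTO readings VALUES (?, ?)", (float(mgdl), ts))
    db.commit()
```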
Procedia PDF Downloads 322
24 Assessment of Delirium, Its Possible Risk Factors and Outcome in Patients Admitted to a Medical Intensive Care Unit
Authors: Rupesh K. Chaudhary, Narinder P. Jain, Rajesh Mahajan, Rajat Manchanda
Abstract:
Introduction: Delirium is a complex, multifactorial neuropsychiatric syndrome comprising a broad range of cognitive and neurobehavioral symptoms. In critically ill patients, it may develop secondary to multiple predisposing factors. Although it can be transient and reversible, if left untreated it may lead to long-term cognitive dysfunction. Early identification and assessment of risk factors usually help in the appropriate management of delirium, which in turn leads to decreased hospital stay, cost of therapy and mortality. Aim and Objective: The aim of the present study was to estimate the incidence of delirium using a validated scale in medical ICU patients and to determine the associated risk factors and outcomes. Material and Method: A prospective study in an 18-bed medical intensive care unit (ICU) was undertaken. A total of 357 consecutive patients admitted to the ICU for more than 24 hours were assessed. These patients were screened with the help of the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU), the Richmond Agitation and Sedation Scale, a screening checklist for delirium, and APACHE II. Appropriate statistical analysis was done to evaluate the risk factors influencing mortality in delirium. Results: Delirium occurred in 54.6% of 194 patients. Risk of delirium was independently associated with a history of hypertension and diabetes, but not with severity of illness (APACHE II score). Delirium was linked to longer ICU stay (13.08 ± 9.6 versus 7.07 ± 4.98 days) and higher ICU mortality (35.8% vs. 17.0%). Conclusion: Our study concluded that delirium is a major risk factor for poor patient outcome and carries high mortality, so timely intervention helps in addressing these issues.
Keywords: delirium, risk factors, outcome, intervention
Procedia PDF Downloads 163
23 Effect of Incremental Forming Parameters on Titanium Alloys Properties
Authors: P. Homola, L. Novakova, V. Kafka, M. P. Oscoz
Abstract:
Shear spinning is closely related to asymmetric incremental sheet forming (AISF) and could significantly reduce the costs incurred by the fabrication of complex aeronautical components, with a minimal environmental impact. The spinning experiments were carried out on commercially pure titanium (Ti-Gr2) and the Ti-6Al-4V (Ti-Gr5) alloy. Three forming modes were used to characterize the titanium alloys' properties with respect to different spinning parameters. The structure and properties of the materials were assessed by means of metallographic analyses and micro-hardness measurements. The highest wall angle failure limit was achieved using the spinning parameters mode for both materials. A feed rate effect was observed only in samples of the Ti-Gr2 material, where refinement of the grain microstructure occurred at lower feed rate and higher tangential speed. The Ti-Gr5 alloy exhibited a decrease in micro-hardness at higher straining due to recovery processes.
Keywords: incremental forming, metallography, shear spinning, titanium alloys
Procedia PDF Downloads 236
22 Effect of Structure on Properties of Incrementally Formed Titanium Alloy Sheets
Authors: Lucie Novakova, Petr Homola, Vaclav Kafka
Abstract:
Asymmetric incremental sheet forming (AISF) could significantly reduce the costs incurred by the fabrication of complex industrial components, with a minimal environmental impact. The AISF experiments were carried out on commercially pure titanium (Ti-Gr2), Timetal (15-3-3-3) alloy, and Ti-6Al-4V (Ti-Gr5) alloy. A special testing geometry was used to characterize the titanium alloys' properties from the point of view of the forming zone and the effect of titanium structure. The structure and properties of the materials were assessed by means of metallographic analyses and microhardness measurements. The highest differences in the assessed parameters as a function of the sampling zone were observed in the case of alpha-phase Ti-Gr2, accompanied by the most substantial sheet thinning. Springback causes a smaller stored deformation in Timetal (a β alloy), resulting in less pronounced microstructure refinement and microhardness increase. The Ti-6Al-4V alloy exhibited early failure due to its poor formability at ambient temperature.
Keywords: incremental forming, metallography, hardness, titanium alloys
Procedia PDF Downloads 449
21 Knowledge Reactor: A Contextual Computing Work in Progress for Eldercare
Authors: Scott N. Gerard, Aliza Heching, Susann M. Keohane, Samuel S. Adams
Abstract:
The world-wide population of people over 60 years of age is growing rapidly. This explosion is placing increasingly onerous demands on individual families, multiple industries and entire countries. Current, human-intensive approaches to eldercare are not sustainable, but IoT and AI technologies can help. The Knowledge Reactor (KR) is a contextual data fusion engine built to address this and other similar problems. It fuses and centralizes IoT and System of Record/Engagement data into a reactive knowledge graph. Cognitive applications and services are constructed with its multi-agent architecture. The KR can scale up and scale down because it exploits container-based, horizontally scalable services for its graph store (JanusGraph) and pub-sub (Kafka) technologies. While the KR can be applied to many domains that require IoT and AI technologies, this paper describes how the KR specifically supports the challenging domain of cognitive eldercare. Rule- and machine learning-based analytics infer activities of daily living from IoT sensor readings. KR scalability, adaptability, flexibility and usability are demonstrated.
Keywords: ambient sensing, AI, artificial intelligence, eldercare, IoT, internet of things, knowledge graph
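A hedged sketch of the KR's sensor path: consume IoT readings from a Kafka topic and apply a simple rule to infer an activity of daily living. The real system writes into a JanusGraph knowledge graph; the topic name, message format, and rule below are assumptions.

```python
# Hypothetical rule-based ADL inference over a Kafka sensor stream,
# using the kafka-python client. Topic and payload shape are assumed.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer("home-sensors",              # assumed topic
                         bootstrap_servers="broker:9092",
                         value_deserializer=lambda b: json.loads(b))

kitchen_events = []
for msg in consumer:
    event = msg.value                 # e.g. {"sensor": "...", "ts": "..."}
    if event.get("sensor") == "kitchen-motion":
        kitchen_events.append(event["ts"])
    # toy rule: sustained kitchen motion suggests meal preparation
    if len(kitchen_events) >= 5:
        print("inferred ADL: meal preparation at", kitchen_events[-1])
        kitchen_events.clear()
```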
Procedia PDF Downloads 174
20 Deployment of Attack Helicopters in Conventional Warfare: The Gulf War
Authors: Mehmet Karabekir
Abstract:
Attack helicopters (AHs) are usually deployed in conventional warfare to destroy the armored and mechanized forces of the enemy. In addition, AHs are able to perform various tasks in deep and close operations: intelligence, surveillance, reconnaissance, air assault operations, and search and rescue operations. Apache helicopters were employed effectively in the Gulf Wars and contributed to the success of the campaign by destroying a large number of armored and mechanized vehicles of the Iraqi Army. The purpose of this article is to discuss the deployment of AHs in conventional warfare in the light of the Gulf Wars. First, the employment of AHs in deep and close operations is addressed with regard to doctrine. Second, the doctrinal and tactical usage of the US armed forces' AH-64 in the 1st and 2nd Gulf Wars is discussed.
Keywords: attack helicopter, conventional warfare, gulf wars
Procedia PDF Downloads 473
19 From Two-Way to Multi-Way: A Comparative Study for Map-Reduce Join Algorithms
Authors: Marwa Hussien Mohamed, Mohamed Helmy Khafagy
Abstract:
Map-Reduce is a programming model widely used to extract valuable information from enormous volumes of data, and it is designed to support heterogeneous datasets. Apache Hadoop MapReduce is used extensively to uncover hidden patterns in applications such as data mining and SQL processing. The most important operation for data analysis is the join, but the MapReduce framework does not directly support a join algorithm. This paper explains and compares two-way and multi-way MapReduce join algorithms; we also implement the MR join algorithms and show the performance of each phase. Our experimental results show that, among the two-way join algorithms, map-side join and map-merge join take the longest time because of the preprocessing step of sorting the data, while among the multi-way join algorithms, the reduce-side cascade join takes the longest time.
Keywords: Hadoop, MapReduce, multi-way join, two-way join, Ubuntu
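For concreteness, the reduce-side (repartition) join covered by the comparison can be sketched in a few lines: mappers tag each record with its source table, the shuffle groups records by join key, and the reducer pairs the two sides. The input formats here are hypothetical, and the shuffle is simulated locally.

```python
# Reduce-side join in miniature; record layouts are made up.
from collections import defaultdict

def map_orders(line):                 # "order_id,customer_id,amount"
    oid, cid, amount = line.split(",")
    return cid, ("O", amount)

def map_customers(line):              # "customer_id,name"
    cid, name = line.split(",")
    return cid, ("C", name)

def reduce_join(key, tagged_values):
    names = [v for tag, v in tagged_values if tag == "C"]
    orders = [v for tag, v in tagged_values if tag == "O"]
    return [(key, n, o) for n in names for o in orders]

# Simulate the framework's shuffle-and-sort step locally:
groups = defaultdict(list)
for k, v in map(map_customers, ["1,alice", "2,bob"]):
    groups[k].append(v)
for k, v in map(map_orders, ["10,1,99.50", "11,1,12.00"]):
    groups[k].append(v)

for key, vals in groups.items():
    print(reduce_join(key, vals))     # customer 1 joins two orders
```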
Procedia PDF Downloads 486
18 The Management Information System for Convenience Stores: Case Study in 7 Eleven Shop in Bangkok
Authors: Supattra Kanchanopast
Abstract:
The purpose of this research is to develop and design a management information system for 7-Eleven shops in Bangkok. The system was designed and developed to meet users' requirements over the internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP Hypertext Preprocessor as the interface between the web server, the database and the users. The system was designed as two subsystems: the main system, for the head office, and the branch system for branch shops. These consisted of three parts, classified by user role: shop management, inventory management and Point of Sale (POS) management. The implementation of the MIS for the mini-mart shop can lessen the amount of paperwork and reduce repetitive tasks, so it may decrease the costs of the business and support the opening of additional branches in the future.
Keywords: convenience store, the management information system, inventory management, 7 eleven shop
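The POS part of such a system reduces to one atomic unit of work: record the sale and decrement inventory together. The paper's stack is PHP/MySQL; this sketch uses Python's built-in sqlite3 purely to illustrate the transaction, with a hypothetical schema.

```python
# Hypothetical POS transaction: sale row and stock decrement succeed
# or fail together. Schema and SKU are made up for illustration.
import sqlite3

db = sqlite3.connect("branch.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS stock (sku TEXT PRIMARY KEY, qty INTEGER);
CREATE TABLE IF NOT EXISTS sales (sku TEXT, qty INTEGER, ts TEXT);
INSERT OR IGNORE INTO stock VALUES ('4902777001', 20);
""")

def sell(sku, qty):
    with db:                          # one transaction: all or nothing
        cur = db.execute(
            "UPDATE stock SET qty = qty - ? WHERE sku = ? AND qty >= ?",
            (qty, sku, qty))
        if cur.rowcount == 0:
            raise ValueError("insufficient stock")
        db.execute("INSERT INTO sales VALUES (?, ?, datetime('now'))",
                   (sku, qty))

sell("4902777001", 2)
print(db.execute("SELECT qty FROM stock").fetchone())   # (18,)
```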
Procedia PDF Downloads 481
17 Sensor Data Analysis for a Large Mining Major
Authors: Sudipto Shanker Dasgupta
Abstract:
One of the largest mining companies wanted health analytics for its driverless trucks, which are key to its supply-chain logistics. The automated trucks have multi-level subassemblies that send out sensor information. The use case was to capture the sensor signals from the truck subcomponents and analyze the health of the trucks from a repair and replacement perspective. Open source software was used to stream the data into a clustered Hadoop setup in the Amazon Web Services cloud, and Apache Spark SQL was used to analyze the data. Real-time analytics on 300 million records was achieved with a 10-node, 32-core, 64 GB RAM Amazon setup. To check the scalability of the system, the cluster was increased to a 100-node setup. This talk highlights how open source software was used to achieve the above use case, and offers insights into achieving high data throughput on a cloud setup.
Keywords: streaming analytics, data science, big data, Hadoop, high throughput, sensor data
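A hedged sketch of the analysis layer: Spark SQL over landed sensor records to flag subassemblies whose recent readings drift out of range. The schema, thresholds, and paths are assumptions.

```python
# Hypothetical truck-health query over landed telemetry with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("truck-health-sketch").getOrCreate()

readings = spark.read.parquet("s3://mine-telemetry/readings/")
readings.createOrReplaceTempView("readings")

# Flag subassemblies whose last week of readings looks abnormal.
alerts = spark.sql("""
    SELECT truck_id, subassembly,
           AVG(vibration) AS avg_vib, MAX(temp_c) AS max_temp
    FROM readings
    WHERE ts > date_sub(current_date(), 7)
    GROUP BY truck_id, subassembly
    HAVING AVG(vibration) > 4.5 OR MAX(temp_c) > 95
""")

alerts.write.mode("overwrite").parquet("s3://mine-telemetry/alerts/")
```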
Procedia PDF Downloads 403
16 Accounting Management Information System for Convenient Shop in Bangkok Thailand
Authors: Anocha Rojanapanich
Abstract:
The purpose of this research is to develop and design an accounting management information system for convenience shops in Bangkok, Thailand. The study applied the System Development Life Cycle (SDLC), beginning with a study and analysis of current data, including the existing system. The system was then designed and developed to meet users' requirements over the internet, using application software such as MySQL for database management, Apache HTTP Server as the web server, and PHP Hypertext Preprocessor as the interface between the web server, the database and the users. The system was designed as two subsystems: the main system, for the head office, and the branch system for branch shops. These consisted of three parts, classified by user role: shop management, inventory management and Point of Sale (POS) management, with emphasis on the importance of cost information for decision making.
Keywords: accounting management information system, convenient shop, cost information for decision making system, development life cycle
Procedia PDF Downloads 419
15 Intrusion Detection Based on Graph Oriented Big Data Analytics
Authors: Ahlem Abid, Farah Jemili
Abstract:
Intrusion detection has been the subject of numerous studies in industry and academia, but cyber security analysts always want greater precision and global threat analysis to secure their systems in cyberspace. To improve intrusion detection systems, visualisation of security events in the form of graphs and diagrams is important for improving the accuracy of alerts. In this paper, we propose an IDS approach based on cloud computing and big data techniques, using a graph-based machine learning algorithm that can detect different attacks in real time, as early as possible. We use the MAWILab intrusion detection dataset and choose Microsoft Azure as a unified cloud environment on which to load it. We implement the K2 algorithm, a graph-based machine learning algorithm, to classify attacks. Our system showed good performance due to the graph-based machine learning algorithm and the Spark Structured Streaming engine.
Keywords: Apache Spark Streaming, graph, intrusion detection, K2 algorithm, machine learning, MAWILab, Microsoft Azure cloud
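A hedged sketch of the real-time path: Spark Structured Streaming reads incoming events and applies a pre-trained model per micro-batch. The paper's classifier is learned with the K2 algorithm; here a previously fitted Spark ML pipeline stands in for it, and the source format and paths are assumptions.

```python
# Hypothetical streaming-scoring skeleton; model, schema, and paths
# are assumptions, and any fitted Spark ML pipeline could stand in
# for the paper's K2-based classifier.
from pyspark.ml import PipelineModel
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ids-stream-sketch").getOrCreate()

model = PipelineModel.load("/models/ids")      # assumed pre-trained

events = (spark.readStream
          .format("json")
          .schema("src STRING, dst STRING, bytes LONG, flags STRING")
          .load("/landing/mawilab/"))           # assumed landing zone

scored = model.transform(events)

query = (scored.filter("prediction = 1.0")     # alert on attack class
         .writeStream
         .format("console")
         .outputMode("append")
         .start())
query.awaitTermination()
```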
Procedia PDF Downloads 145
14 [Keynote]: No-Trust-Zone Architecture for Securing Supervisory Control and Data Acquisition
Authors: Michael Okeke, Andrew Blyth
Abstract:
Supervisory Control And Data Acquisition (SCADA) systems, the state of the art in Industrial Control Systems (ICS), are used in many different critical infrastructures, from smart homes to energy systems and from locomotive train systems to planes. Security of SCADA systems is vital, since many lives depend on them for daily activities, and deviation from normal operation could be disastrous to the environment as well as to lives. This paper describes how a No-Trust-Zone (NTZ) architecture can be incorporated into SCADA systems in order to reduce the chances of malicious intent succeeding. The architecture is made up of two distinct parts. The first is the field devices, such as sensors, PLCs, pumps, and actuators. The second part is designed following the lambda architecture, comprising a detection algorithm based on Particle Swarm Optimization (PSO) and the Hadoop framework for data processing and storage. Apache Spark forms part of the lambda architecture for real-time analysis of packets for anomaly detection.
Keywords: industrial control system (ICS), no-trust-zone (NTZ), particle swarm optimisation (PSO), supervisory control and data acquisition (SCADA), swarm intelligence (SI)
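A minimal particle swarm optimization loop of the kind the detection algorithm could build on: particles search for a parameter vector minimizing a fitness function. The objective below is a stand-in, since the paper's fitness over SCADA traffic features is not specified.

```python
# Canonical PSO loop; the fitness function is a placeholder.
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                        # stand-in objective
    return ((x - 3.0) ** 2).sum()

dim, n, iters, w, c1, c2 = 4, 20, 100, 0.7, 1.5, 1.5
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([fitness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()]

print(gbest.round(3))   # converges near [3, 3, 3, 3]
```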
Procedia PDF Downloads 344
13 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. 
Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
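A hedged sketch of two building blocks described above, Bernoulli sampling of transactions and one GCN propagation step H' = ReLU(D^-1/2 (A+I) D^-1/2 H W) over the sampled graph; the field names, sampling rate, and simplified degree normalization are assumptions.

```python
# Hypothetical sampling + one-layer GCN propagation in numpy.
import numpy as np

rng = np.random.default_rng(0)

# 1) probabilistic sampling: keep each transaction with probability p
transactions = [{"from": rng.integers(5), "to": rng.integers(5)}
                for _ in range(200)]                # fabricated edges
sample = [t for t in transactions if rng.random() < 0.1]

# 2) adjacency over the 5 sampled addresses, plus self-loops (A + I)
A = np.zeros((5, 5))
for t in sample:
    A[t["from"], t["to"]] = 1
A += np.eye(5)

d = A.sum(axis=1)                        # degree taken as row sum
A_hat = A / np.sqrt(np.outer(d, d))      # ~ D^-1/2 (A+I) D^-1/2

H = rng.random((5, 8))                   # node features
W = rng.random((8, 4))                   # layer weights
H_next = np.maximum(A_hat @ H @ W, 0)    # one GCN layer with ReLU
print(H_next.shape)                      # (5, 4)
```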
Procedia PDF Downloads 75
12 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University
Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat
Abstract:
Scientific researchers face challenges in the analysis of very large data sets, which are growing at a noticeable rate in today's and tomorrow's technologies. Hadoop and Spark are software frameworks developed for this purpose, and the Hadoop framework is suitable for many different hardware platforms. In this research, a scientific Linux cluster for big data analysis (SLBD) is presented. SLBD runs open source software with large computational capacity on a high performance cluster infrastructure. SLBD is composed of one cluster containing identical, commodity-grade computers interconnected via a small LAN. It consists of a fast switch and Gigabit-Ethernet cards connecting four nodes. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster using the MapReduce algorithm. The MapReduce algorithm divides a task into smaller tasks which are assigned to the cluster nodes; the results are then collected to form the final result dataset. The SLBD clustering system allows fast and efficient processing of the large amounts of data produced by different applications. SLBD also provides high performance, high throughput, high availability, expandability and cluster scalability.
Keywords: big data platforms, cloudera manager, Hadoop, MapReduce
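The MapReduce flow described here is easiest to see as a Hadoop Streaming job: a mapper emits key/value lines, the framework sorts them by key, and a reducer aggregates runs of equal keys. A classic word-count pair of scripts follows (two separate files shown in one listing; the streaming jar invocation is an assumption about the cluster layout).

```python
# Runnable with something like:
#   hadoop jar hadoop-streaming.jar -mapper mapper.py -reducer reducer.py \
#       -input /data/in -output /data/out        (paths assumed)

# --- mapper.py ---
import sys
for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")          # emit key/value pair per word

# --- reducer.py --- (input arrives sorted by key)
import sys
current, count = None, 0
for line in sys.stdin:
    word, n = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{count}")   # flush previous key's total
        current, count = word, 0
    count += int(n)
if current is not None:
    print(f"{current}\t{count}")
```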
Procedia PDF Downloads 357