Search results for: time series feature extraction
18280 Hierarchical Tree Long Short-Term Memory for Sentence Representations
Authors: Xiuying Wang, Changliang Li, Bo Xu
Abstract:
A fixed-length feature vector is required by many machine learning algorithms in the NLP field. Word embeddings have been very successful at learning lexical information. However, they cannot capture the compositional meaning of sentences, which prevents them from a deeper understanding of language. In this paper, we introduce a novel hierarchical tree long short-term memory (HTLSTM) model that learns vector representations for sentences of arbitrary syntactic type and length. We propose to split one sentence into three hierarchies: short phrase, long phrase and full sentence level. The HTLSTM model gives our algorithm the potential to fully consider the hierarchical information and long-term dependencies of language. We design experiments on both English and Chinese corpora to evaluate our model on the sentiment analysis task. The results show that our model significantly outperforms several existing state-of-the-art approaches. Keywords: deep learning, hierarchical tree long short-term memory, sentence representation, sentiment analysis
Procedia PDF Downloads 348
18279 Thinking Lean in ICU: A Time Motion Study Quantifying ICU Nurses’ Multitasking Time Allocation
Authors: Fatma Refaat Ahmed, PhD, RN, Assistant Professor, Department of Nursing, College of Health Sciences, University of Sharjah, UAE; Sally Mohamed Farghaly, Nursing Administration Department, Faculty of Nursing, Alexandria University, Alexandria, Egypt
Abstract:
Context: Intensive care unit (ICU) nurses often face pressure and constraints in their work, leading to the rationing of care when demands exceed available time and resources. Observations suggest that ICU nurses are frequently distracted from their core nursing roles by non-core tasks. This study aims to provide evidence on ICU nurses' multitasking activities and explore the association between nurses' personal and clinical characteristics and their time allocation. Research Aim: The aim of this study is to quantify the time spent by ICU nurses on multitasking activities and investigate the relationship between their personal and clinical characteristics and time allocation. Methodology: A self-observation form utilizing the "Diary" recording method was used to record the number of tasks performed by ICU nurses and the time allocated to each task category. Nurses also reported on the distractions encountered during their nursing activities. A convenience sample of 60 ICU nurses participated in the study, with each nurse observed for one nursing shift (6 hours), amounting to a total of 360 hours. The study was conducted in two ICUs within a university teaching hospital in Alexandria, Egypt. Findings: The results showed that ICU nurses completed 2,730 direct patient-related tasks and 1,037 indirect tasks during the 360-hour observation period. Nurses spent an average of 33.65 minutes on ventilator care-related tasks, 14.88 minutes on tube care-related tasks, and 10.77 minutes on inpatient care-related tasks. Additionally, nurses spent an average of 17.70 minutes on indirect care tasks per hour. The study identified correlations between nursing time and nurses' personal and clinical characteristics. Theoretical Importance: This study contributes to the existing research on ICU nurses' multitasking activities and their relationship with personal and clinical characteristics. 
The findings shed light on the significant time spent by ICU nurses on direct care for mechanically ventilated patients and the distractions that require attention from ICU managers. Data Collection: Data were collected using self-observation forms completed by participating ICU nurses. The forms recorded the number of tasks performed, the time allocated to each task category, and any distractions encountered during nursing activities. Analysis Procedures: The collected data were analyzed to quantify the time spent on different tasks by ICU nurses. Correlations were also examined between nursing time and nurses' personal and clinical characteristics. Question Addressed: This study addressed the question of how ICU nurses allocate their time across multitasking activities and whether there is an association between nurses' personal and clinical characteristics and time allocation. Conclusion: The findings of this study emphasize the need for a lean evaluation of ICU nurses' activities to identify and address potential gaps in patient care and distractions. Implementing lean techniques can improve efficiency, safety, clinical outcomes, and satisfaction for both patients and nurses, ultimately enhancing the quality of care and organizational performance in the ICU setting. Keywords: motion study, ICU nurse, lean, nursing time, multitasking activities
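The headline rates in this abstract can be cross-checked with simple arithmetic over the reported totals; a minimal sketch using only the figures quoted above (task counts and observation hours from the abstract):

```python
# Figures quoted in the abstract: 2,730 direct and 1,037 indirect tasks over 360 observed hours.
direct_tasks, indirect_tasks, observed_hours = 2730, 1037, 360

direct_rate = direct_tasks / observed_hours      # direct patient-related tasks per nurse-hour
indirect_rate = indirect_tasks / observed_hours  # indirect tasks per nurse-hour

print(round(direct_rate, 2), round(indirect_rate, 2))  # 7.58 2.88
```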
Procedia PDF Downloads 67
18278 Competitive Advantage: Sustainable or Transient
Authors: Pallavi Thacker, H. P. Mathur
Abstract:
This paper examines, from the available literature, the current status of competitive advantage. It has been stated many times that firms must strive to attain sustainable competitive advantage; but is the concept of sustainability of advantage still valid in this new, diversified and rapidly changing world? The paper reaches the conclusion that the answer is “no”. Gone is the time when a once-attained position could easily be retained forever, or at least for a substantial amount of time. We live in a highly globalised time and are used to a high level of competition from all directions. Technological advances, developed human capital, flexibility and countless other factors make the sustenance of competitive advantage difficult. This paper analyses competitive advantage from the viewpoints of Michael Porter (who talks about sustainability) and Rita Gunther McGrath (who says competitive advantage can no longer be sustained). It uses many examples and evidences from papers, journals and news. Research in this area is much needed (especially in a developing country like India) so that industries, firms and people can find out the suitable strategies that match the changing times. Keywords: competitive advantage, sustainable, transient, globalisation
Procedia PDF Downloads 310
18277 Optimal Linear Quadratic Digital Tracker for the Discrete-Time Proper System with an Unknown Disturbance
Authors: Jason Sheng-Hong Tsai, Faezeh Ebrahimzadeh, Min-Ching Chung, Shu-Mei Guo, Leang-San Shieh, Tzong-Jiy Tsai, Li Wang
Abstract:
In this paper, we first construct a new state and disturbance estimator using a discrete-time proportional plus integral observer to estimate the system state and the unknown external disturbance for a discrete-time system with an input-to-output direct-feedthrough term. Then, the generalized optimal linear quadratic digital tracker design is applied to construct a proportional plus integral observer-based tracker for the system with an unknown external disturbance, to achieve the desired tracking performance. Finally, a numerical simulation is given to demonstrate the effectiveness of the new application of our proposed approach. Keywords: non-minimum phase system, optimal linear quadratic tracker, proportional plus integral observer, state and disturbance estimator
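The optimal linear quadratic tracker above rests on solving a discrete algebraic Riccati equation for the feedback gain. As a much-simplified, hedged illustration (a scalar plant rather than the authors' observer-based tracker; all numbers are invented), the gain can be found by fixed-point iteration:

```python
def dlqr_scalar(a, b, q, r, iters=500):
    """Fixed-point iteration of the scalar discrete algebraic Riccati equation,
    returning the cost-to-go P and the state-feedback gain K (control law u = -K x)."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)   # optimal gain for the current cost-to-go
        p = q + a * p * a - a * p * b * k   # Riccati update
    return p, k

# Hypothetical scalar plant x[t+1] = 1.1 x[t] + 0.5 u[t] with unit state/input weights.
p, k = dlqr_scalar(a=1.1, b=0.5, q=1.0, r=1.0)
print(round(p, 4), round(k, 4))  # P ≈ 3.12, K ≈ 0.96; closed loop a - bK ≈ 0.62 is stable
```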
Procedia PDF Downloads 501
18276 Possibility of Prediction of Death in SARS-CoV-2 Patients Using Coagulogram Analysis
Authors: Omonov Jahongir Mahmatkulovic
Abstract:
Purpose: To study the significance of the D-dimer (DD), prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time (TT), and fibrinogen (Fg) coagulation parameters in predicting the course, severity and prognosis of COVID-19. Source and method of research: From September 15, 2021, to November 5, 2021, 93 patients aged 25 to 60 with suspected COVID-19, under inpatient treatment at the multidisciplinary clinic of the Tashkent Medical Academy, were retrospectively examined. DD, PT, APTT, and Fg were studied in dynamics and their changes analyzed. Results: Coagulation disorders occurred in the early stages of COVID-19 infection, with an increase in DD in 54 (58%) patients and an increase in Fg in 93 (100%) patients. DD and Fg levels are associated with the clinical classification. Of the 33 patients who died, 21 had an increase in DD in the first laboratory study, 27 had an increase in DD in the second and third laboratory studies, and 15 had an increase in PT in the third test. The ROC analysis of mortality showed that the AUC of DD across the three tests was 0.721, 0.801, and 0.844, respectively; for PT it was 0.703, 0.845, and 0.972 (P<0.01). Conclusion: Coagulation dysfunction is more common in patients with severe and critical conditions. DD and PT can be used as important predictors of mortality from COVID-19. Keywords: COVID-19, DD, PT, coagulogram analysis, APTT
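The AUC figures above come from ROC analysis of the coagulation markers against mortality. A minimal, dependency-free sketch of how such an AUC is computed (rank-based, equivalent to the Mann-Whitney statistic), using hypothetical D-dimer values rather than the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a randomly chosen positive case (death)
    scores above a randomly chosen negative case (survival); ties count half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical D-dimer values (mg/L), not the study's actual measurements.
dd_died = [2.1, 3.4, 0.8, 4.0]
dd_survived = [0.4, 0.9, 1.2, 0.5]
print(auc(dd_died, dd_survived))  # 0.875
```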
Procedia PDF Downloads 105
18275 Task Scheduling and Resource Allocation in Cloud-based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Scheduling of tasks and the optimal allocation of resources in the cloud are based on the dynamic nature of tasks and the heterogeneity of resources. Applications based on the scientific workflow are among the most widely used in this field, and are characterized by high processing power and storage capacity. To increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in scheduling tasks and selecting resources, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment, based on a modified AHP algorithm, is proposed. In this method, the scheduling of input tasks is based on the two criteria of execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served method. Resource prioritization uses the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are considered, which are the best choice for the mentioned algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, turnaround time and execution time of input tasks in the proposed method compared to similar (basic) methods. Keywords: hierarchical analytical process, task prioritization, normalization, heterogeneous resource allocation, scientific workflow
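The two normalization schemes named above are simple to state; a hedged sketch of Linear Max and Linear Max-Min normalization feeding a weighted ranking of virtual machines (the VM data and criterion weights are invented for illustration, not taken from the paper):

```python
def linear_max(values):
    """Linear Max normalization: divide each value by the column maximum."""
    m = max(values)
    return [v / m for v in values]

def linear_max_min(values):
    """Linear Max-Min normalization: rescale each column to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical VMs scored on memory (GB), processor speed (GHz), bandwidth (Mbps).
vms = {"vm1": (8, 2.4, 100), "vm2": (16, 3.0, 250), "vm3": (4, 3.6, 500)}
weights = (0.5, 0.3, 0.2)  # assumed AHP-derived criterion weights

cols = list(zip(*vms.values()))                 # one tuple per criterion
norm = [linear_max(list(c)) for c in cols]      # normalize each criterion column
scores = {name: sum(w * norm[j][i] for j, w in enumerate(weights))
          for i, name in enumerate(vms)}
best = max(scores, key=scores.get)
print(best)  # vm2
```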
Procedia PDF Downloads 141
18274 Enrichment of the Antioxidant Activity of Decaffeinated Assam Green Tea by Herbal Plant: A Synergistic Effect
Authors: Abhijit Das, Runu Chakraborty
Abstract:
Tea is the most widely consumed beverage aside from water; it is grown in about 30 countries, with a worldwide per capita consumption of approximately 0.12 liter per year. Green tea is of growing importance, with its antioxidant contents associated with its health benefits. The extraction method used can influence the polyphenol concentrations of green tea. The purpose of this study was to quantify the polyphenols, flavonoids and antioxidant activity of both the caffeinated and decaffeinated forms of tea manufactured commercially in Assam, in the north-eastern part of India. The results show that the phenolic/flavonoid content correlated well with antioxidant activity, which was measured by the DPPH (2,2-diphenyl-1-picrylhydrazyl) and FRAP (ferric reducing ability of plasma) assays. After decaffeination, there is a decrease in polyphenol concentration, which also affects the antioxidant activity of the green tea. To enrich the antioxidant activity of the decaffeinated tea, a herbal plant extract was used, which shows a synergistic effect between the green tea and herbal plant phenolic compounds. Keywords: antioxidant activity, decaffeination, green tea, flavonoid content, phenolic content, plant extract
Procedia PDF Downloads 345
18273 Personal Information Classification Based on Deep Learning in Automatic Form Filling System
Authors: Shunzuo Wu, Xudong Luo, Yuanxiu Liao
Abstract:
Recently, the rapid development of deep learning has allowed artificial intelligence (AI) to penetrate many fields, replacing manual work there. In particular, AI systems have become a research focus in the field of office automation. To meet real needs in office automation, in this paper we develop an automatic form filling system. Specifically, it uses two classical neural network models and several word embedding models to classify various relevant information elicited from the Internet. When training the neural network models, we use less noisy and balanced data. We conduct a series of experiments to test our system, and the results show that it can achieve better classification results. Keywords: artificial intelligence and office, NLP, deep learning, text classification
Procedia PDF Downloads 198
18272 Design and Optimization Fire Alarm System to Protect Gas Condensate Reservoirs With the Use of Nano-Technology
Authors: Hefzollah Mohammadian, Ensieh Hajeb, Mohamad Baqer Heidari
Abstract:
In this paper, motivated by the need to protect gas (flammable material) tanks and by the considerable economic value of such reservoirs, a new system for protection, conservation and fire fighting has been developed. The system consists of several parts: sensors to detect heat and fire built with nanotechnology (nano sensors); a barrier for isolation and protection of the two electronic zones; an analyzer for detecting and accurately locating the point of fire; and a main electronic board to announce a fire, diagnose faults in different locations, raise the relevant alarms and activate the different devices for fire extinguishing and announcement. An important feature of this system is the high speed and capability of its fire detection, such that the ambient temperature threshold it detects can be adjusted. Another advantage of this system is that it is autonomous and does not require a human operator on site. Using nanotechnology, in addition to speeding up the work, reduces the cost of constructing the sensors as well as the notification and fire extinguishing system. Keywords: analyser, barrier, heat resistance, general fault, general alarm, nano sensor
Procedia PDF Downloads 454
18271 Monitoring Public Transportation in Developing Countries Using Automatic Vehicle Location System: A Case Study
Authors: Ahmed Osama, Hassan A. Mahdy, Khalid A. Kandil, Mohamed Elhabiby
Abstract:
Automatic Vehicle Location (AVL) systems have been used worldwide for more than twenty years and have shown great success in public transportation management and monitoring. The Cairo public bus service suffers from several problems, such as unscheduled stops, unscheduled route deviations, and inaccurate schedules, which have negative impacts on service reliability. This research studies those problems for a selected bus route in Cairo using a prototype AVL system. Experimental trips were run on the selected route, and the locations of unscheduled stops and regions of unscheduled deviations, along with other trip time and speed data, were collected. The data were analyzed to demonstrate the reliance of passengers on the unscheduled stops compared to the scheduled ones. Trip time was also modeled to assess the unscheduled stops’ impact on trip time and to check the accuracy of the applied scheduled trip time. Moreover, the frequency and length of the unscheduled route deviations, as well as their impact on the bus stops, were illustrated. Solutions were proposed for the bus service deficiencies using the AVL system. Finally, recommendations were proposed for further research. Keywords: automatic vehicle location, public transportation, unscheduled stops, unscheduled route deviations, inaccurate schedule
Procedia PDF Downloads 388
18270 Multi-Objective Simulated Annealing Algorithms for Scheduling Just-In-Time Assembly Lines
Authors: Ghorbanali Mohammadi
Abstract:
New approaches to sequencing mixed-model manufacturing systems are presented. These approaches have attracted considerable attention due to their potential to deal with difficult optimization problems. This paper presents Multi-Objective Simulated Annealing Algorithm (MOSAA) approaches to the Just-In-Time (JIT) sequencing problem, where workload smoothing (WL) and the number of set-ups (St) are to be optimized simultaneously. Mixed-model assembly lines are production lines where a variety of product models, similar in product characteristics, are assembled; this type of problem is NP-hard. Two annealing methods are proposed to solve the multi-objective problem and find an efficient frontier of all design configurations. The performance of the two methods is tested on several problems from the literature. Experimentation demonstrates the relatively desirable performance of the presented methodology. Keywords: scheduling, just-in-time, mixed-model assembly line, sequencing, simulated annealing
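As a hedged sketch of the general technique (a single-chain, weighted-sum simulated annealing over pairwise swaps, not the authors' MOSAA), the two objectives named above, workload smoothing and the number of set-ups, can be minimized jointly:

```python
import math
import random

def setups(seq):
    """Count model changeovers between adjacent positions in the sequence."""
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

def smoothing(seq):
    """JIT workload smoothing: squared deviation of cumulative production
    from the ideal constant-rate schedule (goal-chasing style objective)."""
    total = len(seq)
    demand = {m: seq.count(m) for m in set(seq)}
    cost, produced = 0.0, {m: 0 for m in demand}
    for k, m in enumerate(seq, start=1):
        produced[m] += 1
        cost += sum((produced[j] - k * demand[j] / total) ** 2 for j in demand)
    return cost

def anneal(seq, w=0.5, temp=10.0, cooling=0.95, steps=2000, seed=1):
    """Minimize a weighted sum of the two objectives by random pairwise swaps."""
    rng = random.Random(seed)
    energy = lambda s: w * smoothing(s) + (1 - w) * setups(s)
    cur, cur_e = seq[:], energy(seq)
    best, best_e = cur[:], cur_e
    for _ in range(steps):
        i, j = rng.randrange(len(cur)), rng.randrange(len(cur))
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        e = energy(cand)
        if e < cur_e or rng.random() < math.exp(-(e - cur_e) / temp):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand[:], e
        temp *= cooling
    return best, best_e

start = ["A"] * 4 + ["B"] * 2 + ["C"] * 2   # hypothetical demand mix
best_seq, best_cost = anneal(start)
print(best_seq, round(best_cost, 3))
```

A Pareto-archive variant, as in the paper, would keep every non-dominated (WL, St) pair instead of collapsing the objectives into one weighted energy.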
Procedia PDF Downloads 127
18269 A Quick Method for Seismic Vulnerability Evaluation of Offshore Structures by Static and Dynamic Nonlinear Analyses
Authors: Somayyeh Karimiyan
Abstract:
To evaluate the seismic vulnerability of vital offshore structures with the highest possible precision, Nonlinear Time History Analysis (NLTHA) is the most reliable method. However, since it is very time-consuming, a quicker procedure is greatly desired. This paper presents a quick method that combines Push Over Analysis (POA) and NLTHA. The POA is performed first to recognize the more critical members, and then the NLTHA is performed to evaluate the critical members’ vulnerability more precisely. The proposed method has been applied to a jacket-type structure. Results show that combining POA and NLTHA is a reliable seismic evaluation method, and also that no single earthquake characteristic alone can be a dominant factor in vulnerability evaluation. Keywords: jacket structure, seismic evaluation, push-over and nonlinear time history analyses, critical members
Procedia PDF Downloads 279
18268 Machine Learning Framework: Competitive Intelligence and Key Drivers Identification of Market Share Trends among Healthcare Facilities
Authors: Anudeep Appe, Bhanu Poluparthi, Lakshmi Kasivajjula, Udai Mv, Sobha Bagadi, Punya Modi, Aditya Singh, Hemanth Gunupudi, Spenser Troiano, Jeff Paul, Justin Stovall, Justin Yamamoto
Abstract:
The necessity of data-driven decisions in healthcare strategy formulation is rapidly increasing. A reliable framework which helps identify the factors impacting a healthcare provider facility or hospital (from here on termed a facility) market share is of key importance. This pilot study aims at developing a data-driven machine learning regression framework which aids strategists in formulating key decisions to improve the facility’s market share, which in turn helps improve the quality of healthcare services. The US (United States) healthcare business is chosen for the study, and data spanning 60 key facilities in Washington State and about 3 years of history are considered. In the current analysis, market share is defined as the ratio of the facility’s encounters to the total encounters among the group of potential competitor facilities. The current study proposes a two-pronged approach of competitor identification and a regression approach to evaluate and predict market share, respectively. We leveraged a model-agnostic technique, SHAP, to quantify the relative importance of features impacting the market share. Typical techniques in the literature quantify the degree of competitiveness among facilities by using an empirical method to calculate a competitive factor interpreting the severity of competition. The proposed method identifies a pool of competitors, develops Directed Acyclic Graphs (DAGs) and feature-level word vectors, and evaluates the key connected components at the facility level. This technique is robust since it is data-driven, which minimizes the bias of empirical techniques. The DAGs factor in partial correlations at various segregations and key demographics of facilities, along with a placeholder to factor in various business rules (for example, quantifying patient exchanges, provider references, and sister facilities). Multiple groups of competitors among facilities were identified.
Leveraging the identified competitors, we developed and fine-tuned a Random Forest regression model to predict the market share. To identify key drivers of market share at an overall level, the permutation feature importance of the attributes was calculated. For relative quantification of features at a facility level, we incorporated SHAP (SHapley Additive exPlanations), a model-agnostic explainer. This helped to identify and rank the attributes at each facility which impact the market share. This approach proposes an amalgamation of two popular and efficient modeling practices, viz., machine learning with graphs and tree-based regression techniques, to reduce the bias. With these, we helped to drive strategic business decisions. Keywords: competition, DAGs, facility, healthcare, machine learning, market share, random forest, SHAP
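The market-share definition used in this study (a facility's encounters over total encounters in its competitor pool) is straightforward to compute; a minimal sketch with hypothetical encounter counts, not the study's data:

```python
def market_share(encounters, facility):
    """Facility encounters divided by total encounters across the competitor pool."""
    return encounters[facility] / sum(encounters.values())

# Hypothetical annual encounter counts for one competitor pool.
pool = {"fac_a": 12000, "fac_b": 8000, "fac_c": 5000}
print(round(market_share(pool, "fac_a"), 3))  # 0.48
```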
Procedia PDF Downloads 89
18267 Viscoelastic Modeling of Hot Mix Asphalt (HMA) under Repeated Loading by Using Finite Element Method
Authors: S. A. Tabatabaei, S. Aarabi
Abstract:
Predicting hot mix asphalt (HMA) response and performance is a challenging task because of the sensitivity of HMA to complex loading and environmental conditions. The behavior of HMA is a function of the temperature of loading and also shows time- and rate-dependent behavior, directly affecting the design criteria of the mixture; the velocity of the passing load determines the loading time and rate. Viscoelasticity describes the reaction of HMA to loading and to environmental conditions such as temperature and moisture, and this behavior has a direct effect on design criteria such as tensile strain and vertical deflection. In this paper, the computational framework for viscoelasticity and its implementation in a 3D HMA model for the finite element method is introduced. The model was analyzed under various repeated loading conditions at constant temperature. The viscoelastic response of HMA is investigated under vehicle-speed loading conditions, the sensitivity of the behavior to the range of speeds is examined, and the response is compared to HMA that is assumed to behave elastically, as in conventional design methods. The results show the importance of loading pulse time, unloading time and the various speeds on the design criteria, as well as the importance of the memory fading of the material in storing strain and stress due to repeated loading. The model was simulated with the ABAQUS finite element package. Keywords: viscoelasticity, finite element method, repeated loading, HMA
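Viscoelastic behavior of the kind described here is commonly supplied to finite element packages such as ABAQUS as a Prony series relaxation modulus; a hedged sketch with invented parameters (not the paper's calibration) showing why shorter load pulses, i.e. faster vehicles, see a stiffer response:

```python
import math

def relaxation_modulus(t, e_inf, terms):
    """Prony series relaxation modulus: E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    return e_inf + sum(e_i * math.exp(-t / tau_i) for e_i, tau_i in terms)

# Hypothetical Prony parameters for an HMA mix (moduli in MPa, relaxation times in s).
e_inf = 50.0
terms = [(3000.0, 0.1), (1500.0, 1.0), (500.0, 10.0)]

# Shorter load pulses (faster vehicles) leave less time to relax, so E(t) is higher.
for t in (0.01, 0.1, 1.0):
    print(t, round(relaxation_modulus(t, e_inf, terms), 1))
```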
Procedia PDF Downloads 396
18266 Index t-SNE: Tracking Dynamics of High-Dimensional Datasets with Coherent Embeddings
Authors: Gaelle Candel, David Naccache
Abstract:
t-SNE is an embedding method that the data science community has widely used. It supports two main tasks: displaying results by coloring items according to the item class or feature value, and, for forensics, giving a first overview of the dataset distribution. Two interesting characteristics of t-SNE are its structure preservation property and its answer to the crowding problem, where all neighbors in high-dimensional space cannot be represented correctly in low-dimensional space. t-SNE preserves the local neighborhood, and similar items are nicely spaced by adjusting to the local density. These two characteristics produce a meaningful representation, where the cluster area is proportional to its size in number, and relationships between clusters are materialized by closeness on the embedding. This algorithm is non-parametric: the transformation from a high- to a low-dimensional space is described but not learned, so two initializations of the algorithm would lead to two different embeddings. In a forensic approach, analysts would like to compare two or more datasets using their embeddings. A naive approach would be to embed all datasets together. However, this process is costly, as the complexity of t-SNE is quadratic, and would be infeasible for too many datasets. Another approach would be to learn a parametric model over an embedding built with a subset of data. While this approach is highly scalable, points could be mapped at the exact same position, making them indistinguishable, and this type of model would be unable to adapt to new outliers or to concept drift. This paper presents a methodology to reuse an embedding to create a new one, where cluster positions are preserved. The optimization process minimizes two costs, one relative to the embedding shape and the second relative to the match with the support embedding. The embedding-with-support process can be repeated more than once, with the newly obtained embedding.
The successive embeddings can be used to study the impact of one variable on the dataset distribution or to monitor changes over time. This method has the same complexity as t-SNE per embedding, and memory requirements are only doubled. For a dataset of n elements sorted and split into k subsets, the total embedding complexity would be reduced from O(n²) to O(n²/k), and the memory requirement from n² to 2(n/k)², which enables computation on recent laptops. The method showed promising results on a real-world dataset, allowing the observation of the birth, evolution, and death of clusters. The proposed approach facilitates identifying significant trends and changes, which empowers the monitoring of high-dimensional datasets’ dynamics. Keywords: concept drift, data visualization, dimension reduction, embedding, monitoring, reusability, t-SNE, unsupervised learning
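The complexity claim above is simple arithmetic on the quadratic pairwise cost; a minimal illustration for hypothetical n and k:

```python
def pairwise_cost(n):
    """t-SNE cost scales with the number of point pairs, roughly n^2."""
    return n * n

n, k = 100_000, 10
joint = pairwise_cost(n)            # embedding all n points at once: O(n^2)
split = k * pairwise_cost(n // k)   # k successive embeddings of n/k points: O(n^2 / k)
print(joint // split)  # 10: splitting into k subsets cuts the quadratic cost by a factor of k
```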
Procedia PDF Downloads 141
18265 Mechanism for Network Security via Routing Protocols Estimated with Network Simulator 2 (NS-2)
Authors: Rashid Mahmood, Muhammad Sufyan, Nasir Ahmed
Abstract:
MANETs are decentralized networks with reduced infrastructure requirements, for which numerous routing protocols exist. We divide the MANET protocols into three major categories: reactive, proactive and hybrid. Of these protocols, we discuss only Destination-Sequenced Distance Vector (DSDV), Ad hoc On-demand Distance Vector (AODV) and Dynamic Source Routing (DSR). AODV and DSR are both reactive protocols, while DSDV is a proactive protocol. We compare these routing protocols for network security, estimated with the network simulator (NS-2). In this dissertation, parameters such as simulation time, packet size, number of nodes, packet delivery fraction, pause time and speed are discussed. We evaluate all these parameters on the routing protocols under suitable conditions for network security measures. Keywords: DSDV, AODV, DSR, NS-2, PDF, pause time
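Packet delivery fraction, one of the parameters listed above, is the ratio of packets received at destinations to packets originated by sources; a minimal sketch with hypothetical counts (not results from the dissertation's NS-2 runs):

```python
def packet_delivery_fraction(received, sent):
    """PDF = data packets received at destinations / data packets originated by sources."""
    return received / sent

# Hypothetical counts, as might be parsed from an NS-2 trace file per protocol.
results = {"AODV": (912, 1000), "DSR": (874, 1000), "DSDV": (798, 1000)}
for proto, (rx, tx) in results.items():
    print(proto, packet_delivery_fraction(rx, tx))
```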
Procedia PDF Downloads 431
18264 Polygenetic Iron Mineralization in the Baba-Ali and Galali Deposits, Further Evidences from Stable (S, O, H) Isotope Data, NW Hamedan, Iran
Authors: Ghodratollah Rostami Paydar
Abstract:
The Baba-Ali and Galali iron deposits are located in northwest Hamedan, in the Iranian Sanandaj-Sirjan structural zone. The host rocks of these deposits are metavolcanosedimentary successions of the Songhor stratigraphic series, of Permo-Triassic age. Field investigation, ore geometry, textures and structures, and the paragenetic sequence of minerals all indicate that the ore minerals crystallized in four stages: a primary volcanosedimentary stage; secondary regional metamorphism with formation of ductile shear zones; a contact metamorphism and metasomatism stage; and, finally, late hydrothermal mineralization during uplift and exposure. In total, 29 samples of sulfide, oxide-silicate and carbonate minerals from the iron ores and gangue were purified for stable isotope analysis. The isotope ratio data confirm that the dynamothermal metamorphism in these areas involved a lengthy period of time, which results in a tendency toward isotopic homogenization, specifically in the O and H stable isotopes, showing the role of metamorphic waters in the mineralization process. Measured δ34S (CDT) in the first generation of pyrite is higher than in the later ones, confirming the volcanogenic origin of the primary iron mineralization. The δ13C data in the Galali carbonate country rocks show a marine origin. δ18O in magnetite and skarn-forming silicates, δ18O and δ13C in limestone and skarn calcite, and δ34S in sulphides are all consistent with the interaction of a magmatically equilibrated fluid with the Galali limestone, and a dominantly magmatic source for S. All these data imply skarn formation and mineralisation in a magmatic-hydrothermal system that maintained high salinity to relatively late stages, resulting in the formation of the regional Na metasomatic alteration halo.
The late-stage hydrothermal quartz-calcite veinlets are important for gold mineralization, but their economic evaluation requires detailed geochemical studies. Keywords: iron, polygenetic, stable isotope, Baba-Ali, Galali
Procedia PDF Downloads 300
18263 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model
Authors: Tanu Khanuja, Harikrishnan N. Unni
Abstract:
The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful for understanding the injury mechanism. The majority of physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating the intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head, with major segments such as the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull, from patient-specific MRI (magnetic resonance imaging). The semi-automatic segmentation of the head is performed using the AMIRA software to extract the finer grooves of the brain. Maintaining high accuracy requires a large number of mesh elements and, with them, a long computational time; therefore, mesh optimization has also been performed, using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. The yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impacts on the different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. This paper finds that a back impact is more injurious to the head overall than the other impacts.
The present work should help in understanding the injury mechanism of traumatic brain injury more effectively. Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress
Procedia PDF Downloads 159
18262 Impact of Natural Degradation of Low Density Polyethylene on Its Morphology
Authors: Meryem Imane Babaghayou, Asma Abdelhafidi, Salem Fouad Chabira, Mohammed Sebaa
Abstract:
A challenge for the plastics industry is the realization of materials that resist degradation in their application environment, guaranteeing a longer lifetime and therefore an optimal time of use. Blown-extruded films of low-density polyethylene (LDPE), supplied by SABIC (Saudi Arabia) and blown and extruded at the SOFIPLAST company in Setif, Algeria, were subjected to climatic ageing in a sub-Saharan facility at Laghouat (Algeria) with direct exposure to sun. Samples were characterized by X-ray diffraction (XRD) and differential scanning calorimetry (DSC) techniques after prescribed amounts of time, up to 8 months. These two techniques show the impact of UV irradiation on the morphological development of a plastic material, especially the degree of crystallinity, which increases with exposure time. These morphological changes arise from photooxidative reactions leading to cross-linking in the beginning and to chain scissions at an advanced stage of ageing, the latter being primarily responsible. The change in the degree of crystallinity is essentially controlled by the secondary crystallization of the amorphous chains, whose mobility is enhanced by the chain scission processes. The diffusion of these short segments onto the surface of the lamellae increases their thicknesses in this way. The results presented highlight the complexity of the phenomena involved. Keywords: low-density polyethylene, crystallinity, ageing, XRD, DSC
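The degree of crystallinity tracked by DSC is conventionally the ratio of the measured melting enthalpy to that of a hypothetically 100% crystalline polymer; a hedged sketch using invented enthalpies and the commonly cited ~293 J/g reference value for polyethylene (an assumption, not a figure from the paper):

```python
def crystallinity_percent(dh_melt, dh_ref=293.0):
    """Degree of crystallinity from DSC: measured melting enthalpy over the
    reference enthalpy of 100% crystalline PE (~293 J/g, an assumed literature value)."""
    return 100.0 * dh_melt / dh_ref

# Hypothetical melting enthalpies (J/g) before and after 8 months of exposure.
print(round(crystallinity_percent(105.0), 1))  # unaged film: 35.8 %
print(round(crystallinity_percent(123.0), 1))  # aged film: chain scission raises crystallinity
```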
Procedia PDF Downloads 406
18261 Application of Advanced Remote Sensing Data in Mineral Exploration in the Vicinity of Heavy Dense Forest Cover Area of Jharkhand and Odisha State Mining Area
Authors: Hemant Kumar, R. N. K. Sharma, A. P. Krishna
Abstract:
The study has been carried out on the Saranda area in Jharkhand and part of Odisha state. Geospatial data from Hyperion, a hyperspectral remote sensing satellite, have been used. The study applied a wide variety of image processing techniques to enhance and extract the mining classes of Fe and Mn ores. Landsat-8 OLI sensor data have also been used to correctly explore the related minerals. Various processing steps were applied to enhance the mineralogy classes, and a comparative evaluation of the related spectral responses was performed. The Hyperion hyperspectral dataset proved to be an effective tool for extracting mineral and rock information within the shortwave infrared band range used. The abundant spatial and spectral information contained in hyperspectral images enables the differentiation of surface materials, supporting targeted exploration applications such as mineral detection and mining.
Keywords: Hyperion, hyperspectral, sensor, Landsat-8
Procedia PDF Downloads 122
18260 The Effects of Drying Technology on Rehydration Time and Quality of Mung Bean Vermicelli
Authors: N. P. Tien, S. Songsermpong, T. H. Quan
Abstract:
Mung bean vermicelli is a popular food in Asian countries and is made from mung bean starch. The preparation process involves several steps, including drying, which affects the structure and quality of the vermicelli. This study examines the effects of different drying technologies on the rehydration time and quality of mung bean vermicelli. Three drying technologies were compared: hot air drying, microwave continuous drying, and microwave vacuum drying. The vermicelli strands were dried at 45°C for 12 h in the hot air dryer, with the conveyor belt speed inverter set to 70 Hz in the microwave continuous dryer, and at a microwave power density of 30 W·g⁻¹ in the microwave vacuum dryer. The results showed that vermicelli dried by hot air had the longest rehydration time, 12.69 minutes, whereas vermicelli dried by microwave continuous drying and microwave vacuum drying had shorter rehydration times of 2.79 minutes and 2.14 minutes, respectively. Microwave vacuum drying also resulted in larger porosity, higher water absorption, and higher cooking loss. The tensile strength and elasticity of vermicelli dried by hot air were higher than those obtained with the microwave drying technologies. Sensory evaluation did not reveal significant differences in most attributes among the treatments. Overall, microwave drying proved effective in reducing rehydration time while producing good-quality mung bean vermicelli.
Keywords: mung bean vermicelli, drying, hot air, microwave continuous, microwave vacuum
Procedia PDF Downloads 76
18259 Achieving Sustainable Rapid Construction Using Lean Principles
Authors: Muhamad Azani Yahya, Vikneswaran Munikanan, Mohammed Alias Yusof
Abstract:
There is a need to take a holistic approach to achieving sustainable construction in contemporary practice. Sustainable construction is the practice of preserving the environment, economically and socially, through responsibility, resource management, and maintenance support. This paper shows the correlation between achieving rapid construction and sustainability concepts using lean principles. Lean principles are widely used in the manufacturing industry, and this research demonstrates their application to building construction. Lean principles offer the benefits of stabilizing workflow and eliminating unnecessary work, thereby contributing to time and waste reduction. The correlation shows that the pull factor improves the progress curve and stabilizes the time-quality relation. The findings show that lean principles offer the elements of rapid construction synchronized with the elements of sustainability.
Keywords: sustainable construction, rapid construction, time reduction, lean construction
Procedia PDF Downloads 235
18258 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures
Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.
Abstract:
Previously, the phrase "hydrogen safety" was mostly used in the context of NPP safety. Due to rising interest in "green" energy, and hydrogen power engineering in particular, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, industrial hydrogen production is planned to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, hydrogen production involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous by itself than ignition of a combustible mixture amid pipelines, reactor vessels, and fitting frames. Even ignition of 2100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures much lower than those of the Chapman-Jouguet condition, not exceeding 80 m/s and 6 kPa, respectively. However, space blockage, significant changes in channel diameter along the flame propagation path, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative. Yet conducting deflagration and detonation experiments for each specific industrial facility project, in order to determine safe infrastructure unit placement, does not seem feasible due to high cost and hazard, while numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising.
The basis for this method is a modification of the Kuropatenko method for calculating shock waves, recently developed by the authors, which allows its use in Eulerian coordinates. The current work contains the results of this development process. In addition, numerical simulation results are compared with experimental series on flame propagation in shock tubes with orifice plates.
Keywords: CFD, reacting flow, DDT, gas explosion
Procedia PDF Downloads 88
18257 Development of an in vitro Fermentation Chicken Ileum Microbiota Model
Authors: Bello Gonzalez, Setten Van M., Brouwer M.
Abstract:
The chicken small intestine is a dynamic and complex organ in which the enzymatic digestion and absorption of nutrients take place. The development of an in vitro fermentation model of the chicken small intestine could be used as an alternative to explore the interaction between the microbiota and nutrient metabolism and to enhance the efficacy of targeted interventions to improve animal health. In the present study, we have developed an in vitro fermentation chicken ileum microbiota model for unravelling the complex interactions of the ileum microbial community under physiological conditions. A two-vessel continuous fermentation process simulating in real time the physiological conditions of the ileum content (pH, temperature, microaerophilic/anoxic conditions, and peristaltic movements) has been standardized as a proof of concept. As inoculum, we used a pooled ileum microbial community obtained from broiler chickens at 14 days of age. The development and validation of the model provide insight into the initial characterization of the ileum microbial community and its dynamics over time related to nutrient assimilation and fermentation. Samples can be collected at different time points and used to determine microbial compositional structure, dynamics, and diversity over time. The results of studies using this in vitro model will serve as the foundation for the development of a whole-small-intestine in vitro fermentation chicken gastrointestinal model, complementing our already established in vitro fermentation chicken caeca model. The insight gained from this model could inform nutritional strategies to restore and maintain chicken gut homeostasis.
Moreover, the in vitro fermentation model will also allow us to study relationships between gut microbiota composition and its dynamics over time in association with nutrients, antimicrobial compounds, and disease modelling.
Keywords: broilers, in vitro model, ileum microbiota, fermentation
Procedia PDF Downloads 56
18256 Restoration and Conservation of Historical Textiles Using Covalently Immobilized Enzymes on Nanoparticles
Authors: Mohamed Elbehery
Abstract:
Historical textiles, whether in the burial environment or in museums, are exposed to many types of stains and dirt that bind to the textiles through multiple chemical bonds and cause damage. The cleaning process must therefore be carried out with great care, so that no irreversible damage occurs and deposits are removed without affecting the original material of the surface being cleaned. Science and technology continue to provide innovative systems for the bio-cleaning (using pure enzymes) of historical textiles and artistic surfaces. Lipase and α-amylase were immobilized on nanoparticles of an alginate/κ-carrageenan nanoparticle complex and used in cleaning historical textiles. The preparation and activation of the nanoparticles and the immobilization of the enzymes were characterized, and the loading time and units of the two enzymes were optimized. The optimum time and loading for amylase were found to be 4 h and 25 U, respectively, while for lipase they were 3 h and 15 U. The fibers were examined using a scanning electron microscope equipped with an energy-dispersive X-ray unit (SEM-EDX).
Keywords: nanoparticles, enzymes, immobilization, textiles
Procedia PDF Downloads 97
18255 Nanocrystalline Na0.1V2O5.nH2O Xerogel Thin Film for Gas Sensing
Authors: M. S. Al-Assiri, M. M. El-Desoky, A. A. Bahgat
Abstract:
A nanocrystalline thin film of Na0.1V2O5.nH2O xerogel obtained by sol-gel synthesis was used as a gas sensor. Sensing responses to hydrogen, petroleum vapour, and humidity were investigated. By XRD and TEM, the size of the nanocrystals was found to be 7.5 nm. SEM shows a highly porous structure with sub-micrometer-sized voids present throughout the sample. FTIR measurements show the different chemical groups identifying the obtained series of gels. The sample is an n-type semiconductor according to thermoelectric power and electrical conductivity measurements. The sensor response curves from 130°C to 150°C show a rapid increase in sensitivity for all injected gases, with low response values during the heating period and rapidly rising, high response values during the cooling period. This result suggests that the material can act as a gas sensor during both the heating and cooling processes.
Keywords: sol-gel, thermoelectric power, XRD, TEM, gas sensing
Procedia PDF Downloads 302
18254 IoT and Deep Learning Approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards
Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia
Abstract:
Aquaponics offers a simple, compelling response to the world's food and environmental crises. The approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants without soil). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision making and online monitoring and control of the system. Identifying the different growth stages of Swiss chard plants and predicting their harvest time are important in aquaponic yield management. This paper presents a comparative analysis of standard aquaponics and vermiponics (aquaponics with worms) grown in a controlled environment, implementing IoT and deep-learning-based growth stage segregation and harvest time prediction of Swiss chard before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. It concludes with a comparison of results, including the best-performing models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.
Keywords: aquaponics, deep learning, internet of things, vermiponics
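As a hypothetical sketch of the harvest-time regression step described above (the feature, the numbers, and the helper names below are invented for illustration and are not taken from the study), a minimal least-squares fit predicting days to harvest from a single growth feature could look like this:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept on paired samples."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical training data: mean leaf area (cm^2) vs. observed days to harvest.
leaf_area = [10.0, 20.0, 30.0, 40.0]
days_to_harvest = [38.0, 34.0, 30.0, 26.0]

slope, intercept = fit_line(leaf_area, days_to_harvest)
predicted = slope * 25.0 + intercept  # prediction for a new plant
```

In practice the study's deep-learning models would replace this closed-form fit, but the interface is the same: growth features in, a harvest-time estimate out.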
Procedia PDF Downloads 69
18253 An Investigation on Smartphone-Based Machine Vision System for Inspection
Authors: They Shao Peng
Abstract:
A machine vision system for inspection is an automated technology normally used to analyze items on the production line for quality control purposes; it is also known as an automated visual inspection (AVI) system. With automated visual inspection, the presence of items, defects, contaminants, flaws, and other irregularities in manufactured products can be detected quickly and accurately. However, AVI systems are still inflexible and expensive because each is built for a specific task and consumes a lot of set-up time and space. With the rapid development of mobile devices, smartphones can serve as an alternative device for the vision system that addresses these problems. Since smartphone-based AVI is still at a nascent stage, this motivated the present investigation. This study aims to provide a low-cost AVI system with high efficiency and flexibility. In this project, two object detection models, You Only Look Once (YOLO) and the Single Shot MultiBox Detector (SSD), are trained, evaluated, and integrated with smartphone and webcam devices. The performance of the smartphone-based AVI is compared with the webcam-based AVI in terms of precision and inference time. Additionally, a mobile application is developed that allows users to perform real-time object detection as well as object detection on stored images.
Keywords: automated visual inspection, deep learning, machine vision, mobile application
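The precision and inference-time comparison described above can be sketched with a small, model-agnostic evaluation harness (a hedged illustration: the detector interface, the IoU threshold, and the toy boxes below are assumptions for the sketch, not details from the study):

```python
import time

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def evaluate(detector, samples, iou_thresh=0.5):
    """Return (precision, mean inference time) for a detector callable.

    `samples` is a list of (image, ground_truth_boxes) pairs; `detector`
    maps an image to a list of predicted boxes.
    """
    tp = fp = 0
    elapsed = 0.0
    for image, truths in samples:
        start = time.perf_counter()
        preds = detector(image)
        elapsed += time.perf_counter() - start
        matched = set()
        for p in preds:
            hit = next((j for j, t in enumerate(truths)
                        if j not in matched and iou(p, t) >= iou_thresh), None)
            if hit is None:
                fp += 1
            else:
                tp += 1
                matched.add(hit)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return precision, elapsed / len(samples)

# Toy example: a fake "detector" returning one correct and one spurious box.
samples = [(None, [(0, 0, 10, 10), (20, 20, 30, 30)])]
detector = lambda image: [(0, 0, 10, 10), (50, 50, 60, 60)]
precision, mean_time = evaluate(detector, samples)
```

Swapping in a trained YOLO or SSD forward pass as `detector` would yield the precision/inference-time numbers the comparison relies on, on either a smartphone or a webcam-fed host.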
Procedia PDF Downloads 122
18252 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law that describes the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model, describing the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling such inference, relying on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking their computational time into account.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
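The likelihood-free idea above can be sketched on a much simpler CTMC than the Repressilator (an illustrative toy, not the authors' implementation): an immigration-death process simulated with the Gillespie algorithm, whose birth rate is inferred by ABC rejection sampling against an observed summary statistic.

```python
import random

def gillespie_imm_death(lam, mu, x0, t_end, rng):
    """Simulate an immigration-death CTMC (constant birth rate lam,
    per-capita death rate mu) with the Gillespie algorithm; return
    the population at time t_end."""
    t, x = 0.0, x0
    while True:
        total = lam + mu * x
        t += rng.expovariate(total)
        if t > t_end:
            return x
        if rng.random() < lam / total:
            x += 1
        else:
            x -= 1

def summary(lam, rng, mu=1.0, reps=5):
    """Summary statistic: mean final population over a few simulated runs."""
    return sum(gillespie_imm_death(lam, mu, 0, 10.0, rng)
               for _ in range(reps)) / reps

def abc_rejection(observed, eps, n_accept, rng):
    """Accept birth-rate draws whose simulated summary lies within eps
    of the observed summary; the accepted draws approximate the posterior."""
    accepted = []
    while len(accepted) < n_accept:
        lam = rng.uniform(1.0, 20.0)  # uniform prior on the birth rate
        if abs(summary(lam, rng) - observed) <= eps:
            accepted.append(lam)
    return accepted

rng = random.Random(42)
observed = 10.0  # stationary mean lam/mu for the true lam = 10, mu = 1
posterior = abc_rejection(observed, eps=2.0, n_accept=50, rng=rng)
posterior_mean = sum(posterior) / len(posterior)
```

The exact likelihood of the observed summary is never evaluated; the simulator stands in for it, which is precisely what makes ABC applicable to CTMCs whose chemical master equation is intractable. PMCMC differs in replacing the accept/reject rule with a particle-filter likelihood estimate inside an MCMC chain.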
Procedia PDF Downloads 201
18251 Differences Choosing Closed Approach or Open Approach in Rhinoplasty Outcomes
Authors: Alessandro Marano
Abstract:
Aim: The author describes a strategy for choosing between two different rhinoplasty approaches for the treatment of rhinoplasty outcomes. Methods: A case series study. Both approaches have advantages and disadvantages: the open approach allows better management of the techniques for shaping and restoring nasal structures, while the closed approach requires more practice and experience to achieve good results. Results: The author's choice for rhinoplasty outcomes is the closed approach, even though the open approach is more commonly preferred for its superior handling of, and better vision over, the nasal structures. Conclusions: Both approaches are valid for the treatment of rhinoplasty outcomes; the author's preferred approach is the closed one, with minimally invasive modifications focused on restoring nasal function and aesthetics.
Keywords: rhinoplasty, aesthetic, face, outcomes
Procedia PDF Downloads 108