Search results for: arrival time prediction
16725 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
In order to reduce numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the linear response of the superstructure, the proposed MEIM, which is conditionally stable due to the use of the central difference method, makes it possible to avoid the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one used to accurately define the earthquake excitation, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed using the proposed MEIM, are compared to those obtained by adopting a conventional monolithic solution approach, i.e., the implicit unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis with a smaller time step.
Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability
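The explicit half of such a partitioned scheme can be prototyped in a few lines. The sketch below is only an illustration, not the authors' MEIM: it applies the conditionally stable central difference method to a single-degree-of-freedom isolator with an assumed bilinear restoring force under a synthetic ground acceleration; the mass, damping, stiffness and loading values are assumptions.

```python
import numpy as np

# Minimal central-difference (explicit, conditionally stable) integration of
# m*u'' + c*u' + f_s(u) = -m*a_g(t) for a nonlinear SDOF isolator.
# All numerical values below are illustrative assumptions.
m, c = 1.0e5, 2.0e4               # mass [kg], damping [N s/m]
k0, k1, uy = 5.0e7, 1.0e6, 0.002  # initial/post-yield stiffness [N/m], yield displacement [m]

def f_s(u):
    """Bilinear elastic restoring force (stands in for the isolator law)."""
    return k0 * u if abs(u) <= uy else np.sign(u) * (k0 * uy + k1 * (abs(u) - uy))

dt = 1.0e-4                        # must stay below the critical step dt_cr = T_min / pi
t = np.arange(0.0, 5.0, dt)
a_g = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t)   # synthetic ground acceleration

u = np.zeros_like(t)
a0 = (-m * a_g[0] - c * 0.0 - f_s(0.0)) / m
u_prev = 0.0 - dt * 0.0 + 0.5 * dt**2 * a0        # fictitious displacement at t = -dt

k_eff = m / dt**2 + c / (2 * dt)
for n in range(len(t) - 1):
    p_eff = (-m * a_g[n] - f_s(u[n])
             + (2 * m / dt**2) * u[n]
             - (m / dt**2 - c / (2 * dt)) * u_prev)
    u_next = p_eff / k_eff
    u_prev, u[n + 1] = u[n], u_next

print(f"peak isolator displacement: {np.max(np.abs(u)) * 1000:.2f} mm")
```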
Procedia PDF Downloads 278
16724 Adapting an Accurate Reverse-time Migration Method to USCT Imaging
Authors: Brayden Mi
Abstract:
Reverse time migration has been widely used in the petroleum exploration industry to reveal subsurface images and to detect rock and fluid properties since the early 1980s. The seismic technology involves the construction of a velocity model through interpretive model construction, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data together with the original wavelet used in the acquisition. The methodology has matured from 2D imaging in simple media to present-day algorithms that handle full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bend-ray" method. Its counterpart in seismic applications is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structure. It has the capability to fully reconstruct the image detectable in its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, and may not be applicable to real-time imaging as normally required in day-to-day medical operations. However, with the improvement of computing technology, such a computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms are typically implemented from a flat datum for the seismic industry. They can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. Such flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image, based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation
Procedia PDF Downloads 74
16723 Effect of Time on Stream on the Performances of Plasma Assisted Fe-Doped Cryptomelanes in Trichloroethylene (TCE) Oxidation
Authors: Sharmin Sultana, Nicolas Nuns, Pardis Simon, Jean-Marc Giraudon, Jean-Francois Lamonior, Nathalie D. Geyter, Rino Morent
Abstract:
Environmental issues, especially air pollution, have become a huge concern of environmental legislation as a consequence of growing awareness in our global world. In this regard, control of volatile organic compound (VOC) emissions has become an important issue due to their potential toxicity, carcinogenicity, and mutagenicity. Research into innovative technologies for VOC abatement is stimulated by the need to accommodate the new stringent standards in terms of VOC emission. One emerging strategy is the coupling of two existing complementary technologies, namely here non-thermal plasma (NTP) and heterogeneous catalysis, to get a more efficient process for VOC removal from air. The objective of the current work is to investigate the abatement of trichloroethylene (TCE, a highly toxic chlorinated VOC) from moist air (RH = 15%) as a function of time by the combined use of a multi-pin-to-plate negative DC corona/glow discharge with an Fe-doped cryptomelane catalyst downstream, i.e., a post plasma-catalysis (PPC) process. For the catalyst-alone case, experiments reveal that, initially, Fe-doped cryptomelane (regardless of the mode of Fe incorporation, by co-precipitation (Fe-K-OMS-2) or impregnation (Fe/K-OMS-2)) exhibits excellent activity in decomposing TCE compared to cryptomelane (K-OMS-2) itself. The maximum obtained values of TCE abatement after 6 min are as follows: Fe-K-OMS-2 (73.3%) > Fe/K-OMS-2 (48.5%) > K-OMS-2 (22.6%). However, with prolonged operation time, whatever the catalyst under concern, the abatement of TCE decreases. After 111 min of exposure, the catalysts can be ranked as follows: Fe/K-OMS-2 (11%) < K-OMS-2 (12.3%) < Fe-K-OMS-2 (14.5%). Clearly, this phenomenon indicates catalyst deactivation, either by chlorination or by blocking of the active sites. Remarkably, in the PPC configuration (energy density = 60 J/L, catalyst temperature = 150°C), experiments reveal an enhanced performance towards TCE removal regardless of the type of catalyst. After 6 min time on stream, the TCE removal efficiencies are as follows: K-OMS-2 (60%) < Fe/K-OMS-2 (79%) < Fe-K-OMS-2 (99.3%). The enhanced performance over the Fe-K-OMS-2 catalyst is attributed to its high surface oxygen mobility and structural defects, leading to high O₃ decomposition efficiency to give active species able to oxidize the plasma-processed hazardous by-products and the possibly remaining VOC into CO₂. Moreover, both undoped and doped catalysts remain strongly capable of abating TCE with time on stream. The TCE removal efficiencies of the PPC processes with the Fe/K-OMS-2 and K-OMS-2 catalysts are not affected by time on stream, indicating an excellent catalyst stability. When using Fe-K-OMS-2 as the catalyst, TCE abatement decreases slightly with time on stream. However, it is worth stressing that a constant abatement of 83% is still observed for at least 30 minutes. These results prove that the combination of NTP with catalysts not only increases the catalytic activity but also makes it possible to avoid, to some extent, the poisoning of catalytic sites, resulting in enhanced catalyst stability. In order to better understand the different surface processes occurring in the course of the total TCE oxidation in the PPC experiments, a detailed X-ray Photoelectron Spectroscopy (XPS) and Time of Flight-Secondary Ion Mass Spectrometry (ToF-SIMS) study of the fresh and used catalysts is in progress.
Keywords: Fe doped cryptomelane, non-thermal plasma, plasma-catalysis, stability, trichloroethylene
Procedia PDF Downloads 208
16722 Effects of Computer-Mediated Dictionaries on Reading Comprehension and Vocabulary Acquisition
Authors: Mohamed Amin Mekheimer
Abstract:
This study aimed to investigate the effects of paper-based monolingual, pop-up and type-in electronic dictionaries on improving reading comprehension and incidental vocabulary acquisition and retention in an EFL context. It tapped into how computer-mediated dictionaries may have facilitated or impeded reading comprehension and vocabulary acquisition. Findings showed differential effects produced by the three treatments compared with the control group. Specifically, they revealed that the pop-up dictionary condition had the shortest average vocabulary searching time and the shortest vocabulary and text reading times, with fewer frequent dictionary 'look-ups' than the type-in dictionary group but more than the book dictionary group (p<.0001). In addition, ANOVA analyses showed that text reading time differed significantly across all four treatments, and so did reading comprehension. Vocabulary acquisition was enhanced in the three treatments relative to the control group, though with insignificant differences across the three treatments, yet with more differential effects in favour of the pop-up condition. Data also indicate that participants preferred the pop-up e-dictionary over the type-in and paper-based dictionaries. Explanations of the findings vis-à-vis cognitive load theory were presented. Pedagogical implications and suggestions for further research were put forward at the end.
Keywords: computer-mediated dictionaries, type-in dictionaries, pop-up dictionaries, reading comprehension, vocabulary acquisition
Procedia PDF Downloads 435
16721 Saline Water Transgression into Fresh Coastal Groundwater in the Confined Aquifer of Lagos, Nigeria
Authors: Babatunde Adebo, Adedeji Adetoyinbo
Abstract:
Groundwater is an important constituent of the hydrological cycle and plays a vital role in augmenting water supply to meet the ever-increasing needs of people for domestic, agricultural and industrial purposes. Unfortunately, this important resource has in most cases been contaminated due to the advancement of seawater into the fresh groundwater. This is due to the high volume of water being abstracted in these areas as a result of the high population of coastal dwellers. Knowledge of the salinity level and of saltwater intrusion into the freshwater aquifer is, therefore, necessary for groundwater monitoring and prediction in coastal areas. In this work, an advection-dispersion saltwater intrusion model is used to study and simulate saltwater intrusion in a typical coastal aquifer. The aquifer portion was divided into a grid with elements and nodes. A map of the study area indicating well locations was overlain on the grid system so that these locations coincided with the nodes. Chloride concentrations at these wells were taken as the initial nodal salinities. Results showed highest and lowest increases in simulated chloride of 37.89 mg/L and 0.8 mg/L, respectively. They also revealed that the chloride concentration at most of the considered wells might climb to unacceptable levels in the next few years if the current abstraction rate continues unabated.
Keywords: saltwater intrusion, coastal aquifer, nodal salinity, chloride concentration
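As a hedged illustration of the kind of advection-dispersion calculation described above (not the authors' model or data), the sketch below solves the 1D advection-dispersion equation for chloride with an explicit finite-difference scheme; the domain, velocity, dispersion coefficient and boundary concentrations are assumptions.

```python
import numpy as np

# Explicit finite-difference sketch of 1D advection-dispersion of chloride,
#   dC/dt = D * d2C/dx2 - v * dC/dx,
# with a fixed-concentration seawater boundary on the left. Parameter values
# and boundary concentrations are illustrative assumptions, not site data.
L, nx = 1000.0, 101                 # domain length [m], number of nodes
dx = L / (nx - 1)
D, v = 5.0, 0.05                    # dispersion [m2/day], seepage velocity [m/day]
dt = 0.4 * min(dx / v, dx**2 / (2 * D))   # respect Courant and diffusion limits

C = np.full(nx, 20.0)               # initial chloride [mg/L] at every node
C_sea, C_fresh = 19000.0, 20.0      # boundary chlorides [mg/L]

for _ in range(int(5 * 365 / dt)):  # simulate roughly 5 years
    C[0], C[-1] = C_sea, C_fresh
    dCdx = (C[1:-1] - C[:-2]) / dx                    # upwind advection (v > 0)
    d2Cdx2 = (C[2:] - 2 * C[1:-1] + C[:-2]) / dx**2
    C[1:-1] += dt * (D * d2Cdx2 - v * dCdx)

print(f"chloride 100 m inland after ~5 years: {C[10]:.1f} mg/L")
```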
Procedia PDF Downloads 240
16720 A Model for Diagnosis and Prediction of Coronavirus Using Neural Network
Authors: Sajjad Baghernezhad
Abstract:
Meta-heuristic and hybrid algorithms are highly adept at modeling medical problems. In this study, a neural network was used to predict COVID-19 among high-risk and low-risk patients. Data for the applied method were collected from a target population of 550 high-risk and low-risk patients of the Kerman University of Medical Sciences medical center in order to predict the coronavirus. The memetic algorithm, which is a combination of a genetic algorithm and a local search algorithm, was used to update the weights of the neural network and improve its accuracy. The initial study showed that the accuracy of the neural network was 88%. After updating the weights, the memetic algorithm increased the accuracy to 93%. For the proposed model, the sensitivity, specificity, positive predictive value, and accuracy were 97.4, 92.3, 95.8, 96.2, and 0.918, respectively; for the genetic algorithm model, they were 87.05, 9.20 7, 89.45, 97.30 and 0.967; and for the logistic regression model, they were 87.40, 95.20, 93.79, 0.87 and 0.916. Based on the findings of this study, neural network models have a lower error rate in the diagnosis of patients based on individual variables and vital signs compared to the regression model. The findings of this study can help planners and health care providers in designing programs and in the early diagnosis of COVID-19.
Keywords: COVID-19, decision support technique, neural network, genetic algorithm, memetic algorithm
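A minimal sketch of the memetic idea, a genetic algorithm whose offspring are refined by local search, applied to the weights of a small neural network; the synthetic data, network size and algorithm settings below are assumptions standing in for the patient data and the study's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the patient records (features such as vital signs);
# the real study used 550 patients, so the shape mirrors that as an assumption.
X = rng.normal(size=(550, 8))
true_w = rng.normal(size=8)
y = (X @ true_w + rng.normal(scale=0.5, size=550) > 0).astype(int)

def predict(weights, X):
    """One hidden layer with 4 units; weights is a flat parameter vector."""
    w1, b1 = weights[:32].reshape(8, 4), weights[32:36]
    w2, b2 = weights[36:40], weights[40]
    h = np.tanh(X @ w1 + b1)
    return 1 / (1 + np.exp(-(h @ w2 + b2)))

def accuracy(weights):
    return np.mean((predict(weights, X) > 0.5).astype(int) == y)

DIM = 41
pop = rng.normal(size=(30, DIM))          # initial population of weight vectors

def local_search(w, steps=20, step_size=0.1):
    """Hill climbing: the 'memetic' refinement applied to each offspring."""
    best, best_fit = w, accuracy(w)
    for _ in range(steps):
        cand = best + rng.normal(scale=step_size, size=DIM)
        fit = accuracy(cand)
        if fit > best_fit:
            best, best_fit = cand, fit
    return best

for gen in range(40):
    fitness = np.array([accuracy(w) for w in pop])
    parents = pop[np.argsort(fitness)[::-1][:10]]     # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        mask = rng.random(DIM) < 0.5                  # uniform crossover
        child = np.where(mask, a, b) + rng.normal(scale=0.05, size=DIM)  # mutation
        children.append(local_search(child))
    pop = np.array(children)

best = max(pop, key=accuracy)
print(f"best accuracy on synthetic data: {accuracy(best):.3f}")
```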
Procedia PDF Downloads 66
16719 Apps Reduce the Cost of Construction
Authors: Ali Mohammadi
Abstract:
For every construction project, cost is the most important concern for employers and contractors, and they always try to reduce costs so that they can compete in the market; therefore, they estimate the cost of construction before starting their activities. The costs can generally be divided into four parts: the materials used, the equipment used, the manpower required, and the time required. In this article, we focus on the latter three items, equipment, manpower, and time, and examine how the use of apps can reduce the cost of construction, an area that, for various reasons, has received less attention in the field of app design. Because we intend to use these apps in construction and they are used by engineers and experts, we define them as engineering apps, since the idea for their design must come from an engineer who works in that field. Also, considering that most engineers become familiar with programming during their studies, they can design the apps they need using simple programming software.
Keywords: layout, as-built, monitoring, maps
Procedia PDF Downloads 65
16718 An Automatic Generating Unified Modelling Language Use Case Diagram and Test Cases Based on Classification Tree Method
Authors: Wassana Naiyapo, Atichat Sangtong
Abstract:
The processes in software development using Object Oriented methodology have many stages that take time and incur high cost. An undetected error in the system analysis process will affect the design and the implementation processes. Unexpected outputs are the reason why the previous process has to be revised, and each rollback of a process adds expense and delay. Therefore, with a good test process from the early phases, the implemented software is efficient, reliable and also meets the user's requirements. Unified Modelling Language (UML) is the tool which uses symbols to describe the work process in Object Oriented Analysis (OOA). This paper presents an approach for automatically generating a UML use case diagram and test cases. The UML use case diagram is generated from the event table, and test cases are generated from use case specifications and Graphic User Interfaces (GUI). Test cases are derived with the Classification Tree Method (CTM), which classifies data into nodes arranged in a hierarchical structure. Moreover, this paper describes the program that generates the use case diagram and test cases. As a result, it can reduce working time and increase work efficiency.
Keywords: classification tree method, test case, UML use case diagram, use case specification
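A small sketch of the Classification Tree Method step, in which test cases are generated by combining one class from each classification; the classifications shown are hypothetical and not taken from the paper.

```python
from itertools import product

# Sketch of the Classification Tree Method: each classification of an input
# (a branch of the tree) is split into disjoint classes (the leaves), and test
# cases are built by choosing one class per classification. The example
# classifications below are assumptions for a generic login use case.
classification_tree = {
    "user role":  ["administrator", "registered user", "guest"],
    "credential": ["valid", "invalid", "empty"],
    "GUI state":  ["online", "offline"],
}

# Full combination: one test case per combination of leaves.
test_cases = [dict(zip(classification_tree, combo))
              for combo in product(*classification_tree.values())]

for i, case in enumerate(test_cases, start=1):
    print(f"TC-{i:02d}: {case}")
print(f"{len(test_cases)} test cases generated")
```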
Procedia PDF Downloads 162
16717 Predictive Analytics in Traffic Flow Management: Integrating Temporal Dynamics and Traffic Characteristics to Estimate Travel Time
Authors: Maria Ezziani, Rabie Zine, Amine Amar, Ilhame Kissani
Abstract:
This paper introduces a predictive model for urban transportation engineering, which is vital for efficient traffic management. Utilizing comprehensive datasets and advanced statistical techniques, the model accurately forecasts travel times by considering temporal variations and traffic dynamics. Machine learning algorithms, including regression trees and neural networks, are employed to capture sequential dependencies. Results indicate significant improvements in predictive accuracy, particularly during peak hours and holidays, with the incorporation of traffic flow and speed variables. Future enhancements may integrate weather conditions and traffic incidents. The model's applications range from adaptive traffic management systems to route optimization algorithms, facilitating congestion reduction and enhancing journey reliability. Overall, this research extends beyond travel time estimation, offering insights into broader transportation planning and policy-making realms, empowering stakeholders to optimize infrastructure utilization and improve network efficiency.
Keywords: predictive analytics, traffic flow, travel time estimation, urban transportation, machine learning, traffic management
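A hedged sketch of the kind of model described above: a gradient-boosted regression-tree model trained on synthetic trip records with temporal and traffic-state features. Every feature, coefficient and value here is an assumption for illustration, not the study's data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 5000

# Synthetic stand-in for observed trip records.
hour = rng.integers(0, 24, n)
dow = rng.integers(0, 7, n)                  # day of week
holiday = rng.integers(0, 2, n)
flow = rng.uniform(200, 2000, n)             # vehicles/hour on the route
speed = rng.uniform(15, 90, n)               # average segment speed [km/h]
distance = rng.uniform(2, 25, n)             # trip length [km]

peak = ((hour >= 7) & (hour <= 9)) | ((hour >= 16) & (hour <= 19))
travel_time = (distance / speed * 60                 # free-flow minutes
               + 0.004 * flow + 6 * peak - 2 * holiday
               + rng.normal(0, 2, n))                # unexplained noise

X = np.column_stack([hour, dow, holiday, flow, speed, distance])
X_tr, X_te, y_tr, y_te = train_test_split(X, travel_time, random_state=0)

model = GradientBoostingRegressor(n_estimators=300, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, model.predict(X_te)):.2f} minutes")
```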
Procedia PDF Downloads 84
16716 Fast Aerodynamic Evaluation of Transport Aircraft in Early Phases
Authors: Xavier Bertrand, Alexandre Cayrel
Abstract:
The early phase of an aircraft development is instrumental as it really drives the potential of a new concept. Any weakness in the high-level design (wing planform, moveable surfaces layout, etc.) will be extremely difficult and expensive to recover later in the aircraft development process. Aerodynamic evaluation in this very early development phase is driven by two main criteria: a short lead time, to allow quick iterations of the geometrical design, and a high quality of the calculations, to get an accurate and reliable assessment of the current status. These two criteria are usually quite contradictory. In practice, a short lead time of a couple of hours from end to end can be obtained with very simple tools (semi-empirical methods, for instance), although their accuracy is limited, whereas higher quality calculations require heavier and more complex tools, which obviously need more complex inputs as well and a significantly longer lead time. At this point, a choice has to be made between accuracy and lead time. A brand new approach has been developed within Airbus, aiming at obtaining high quality evaluations of the aerodynamics of an aircraft quickly. This methodology is based on the joint use of surrogate modelling and a lifting line code. The surrogate modelling is used to get the wing section characteristics (e.g. lift coefficient vs. angle of attack), whatever the airfoil geometry, the status of the moveable surfaces (ailerons/spoilers) or the high-lift devices deployment. From these characteristics, the lifting line code is used to get the 3D effects on the wing, whatever the flow conditions (low/high Mach numbers, etc.). This methodology has been applied successfully to a medium-range aircraft concept.
Keywords: aerodynamics, lifting line, surrogate model, CFD
Procedia PDF Downloads 359
16715 Real-time Rate and Rhythms Feedback Control System in Patients with Atrial Fibrillation
Authors: Mohammad A. Obeidat, Ayman M. Mansour
Abstract:
Capturing the dynamic behavior of the heart to improve control performance, enhance robustness, and support diagnosis is very important in establishing real-time models of the heart. Control techniques and strategies have been utilized to improve system cost, reliability, and estimation accuracy for different types of systems, such as biomedical and industrial systems, that require tuning of the input/output relation and/or monitoring. Simulations are performed to illustrate potential applications of the technology. In this research, a new control scheme is used to enhance the performance of the atrial fibrillation (AF) system and meet the design specifications.
Keywords: atrial fibrillation, dynamic behavior, closed loop, signal, filter
Procedia PDF Downloads 421
16714 Transient Heat Transfer: Experimental Investigation near the Critical Point
Authors: Andreas Kohlhepp, Gerrit Schatte, Wieland Christoph, Spliethoff Hartmut
Abstract:
In recent years, research on heat transfer phenomena of water and other working fluids near the critical point has experienced growing interest for power engineering applications. To match the highly volatile characteristics of renewable energies, conventional power plants need to shift towards flexible operation. This requires speeding up the load change dynamics of steam generators and their heating surfaces near the critical point. In dynamic load transients, both a high heat flux with an unfavorable ratio to the mass flux and a high difference between fluid and wall temperatures may cause problems. They may lead to deteriorated heat transfer (at supercritical pressures), dry-out or departure from nucleate boiling (at subcritical pressures), all cases leading to an extensive rise of temperatures. For relevant technical applications, the heat transfer coefficients need to be predicted correctly in transient scenarios to prevent damage to the heated surfaces (membrane walls, tube bundles or fuel rods). In transient processes, the state-of-the-art method of calculating the heat transfer coefficients is to use a multitude of different steady-state correlations with the momentary local parameters at each time step. This approach does not necessarily reflect the different cases that may lead to a significant variation of the heat transfer coefficients and shows gaps in the individual ranges of validity. An algorithm was implemented to calculate the transient behavior of steam generators during load changes. It is used to assess existing correlations for transient heat transfer calculations. It is also desirable to validate the calculation using experimental data. By the use of a new full-scale supercritical thermo-hydraulic test rig, experimental data are obtained to describe the transient phenomena under the dynamic boundary conditions mentioned above and to serve for the validation of transient steam generator calculations. Aiming to improve correlations for the prediction of the onset of deteriorated heat transfer in both stationary and transient cases, the test rig was specially designed for this task. It is a closed-loop design with a directly electrically heated evaporation tube; the total heating power of the evaporator tube and the preheater is 1 MW. To allow a wide range of parameters, including supercritical pressures, the maximum pressure rating is 380 bar. The measurements contain the most important extrinsic thermo-hydraulic parameters. Moreover, a high geometric resolution allows the local heat transfer coefficients and fluid enthalpies to be predicted accurately.
Keywords: departure from nucleate boiling, deteriorated heat transfer, dryout, supercritical working fluid, transient operation of steam generators
Procedia PDF Downloads 222
16713 Causes of Variation Orders in the Egyptian Construction Industry: Time and Cost Impacts
Authors: A. Samer Ezeldin, Jwanda M. El Sarag
Abstract:
Variation orders are of great importance in any construction project. Variation orders are defined as any change in the scope of works of a project, which can be an addition, an omission, or even a modification. This paper investigates the variation orders that occur during construction projects in Egypt. The literature review presents a comparison of the causes of variation orders among Egypt, Tanzania, Nigeria, Malaysia and the United Kingdom. A classification of variation orders into those due to owner-related factors, consultant-related factors and other factors is given in the literature review. These classified events that lead to variation orders were introduced in a survey with 19 events to observe their frequency of occurrence and their time and cost impacts. The survey data were obtained from 87 participants, including clients, consultants, and contractors, and a database of 42 scenarios was created. A model is then developed to assist project managers in predicting the frequency of variations, budgeting for any additional costs, and minimizing any delays that can take place. Two experts with more than 25 years of experience were given the model to verify that it was working effectively. The model was then validated on a residential compound that was completed in July 2016 to prove that it actually produces acceptable results.
Keywords: construction, cost impact, Egypt, time impact, variation orders
Procedia PDF Downloads 183
16712 A Study of Fatigue Life Estimation of a Modular Unmanned Aerial Vehicle by Developing a Structural Health Monitoring System
Authors: Zain Ul Hassan, Muhammad Zain Ul Abadin, Muhammad Zubair Khan
Abstract:
Unmanned aerial vehicles (UAVs) have now become of predominant importance for various operations, and an immense amount of work is going on in this specific category. The structural stability and life of these UAVs are key factors that should be considered when deploying them in different intelligent operations, as their failure leads to the loss of sensitive real-time data and to cost. This paper presents applied research on the development of a structural health monitoring system for a UAV designed and fabricated using a modular approach. Firstly, a modular UAV has been designed which allows the components of the UAV to be dismantled and reassembled without affecting the whole assembly. This novel approach makes the vehicle very sustainable and decreases its maintenance cost significantly by making it possible to replace only the part leading to failure. The SHM for the designed architecture of the UAV was then specified as a combination of wings instrumented with strain gauges, an on-board data logger, bridge circuitry and the ground station. For the purposes of this research, sensors were attached only to the wings, these being the most load-bearing part according to the analysis carried out in ANSYS. On the basis of the analysis of the load-time spectrum obtained by the data logger during flight, the fatigue life of the respective component has been predicted using the fracture mechanics techniques of the Rain Flow Method and Miner's Rule. This allows the health of a specified component to be monitored over time, helping to avoid any failure.
Keywords: fracture mechanics, rain flow method, structural health monitoring system, unmanned aerial vehicle
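As an illustration of the damage-accumulation step, the sketch below applies Miner's rule to cycle counts assumed to have already been extracted from a load-time spectrum by rainflow counting; the S-N curve parameters and stress amplitudes are assumptions, not the study's wing material data.

```python
# Miner's rule on an assumed rainflow-counted load spectrum.
# S-N curve of the Basquin form N(S) = C / S**m; C and m are illustrative values.
C, m = 2.0e12, 3.0

# (stress amplitude [MPa], cycles counted per flight) -- assumed rainflow output
spectrum = [(120.0, 40), (90.0, 260), (60.0, 1500), (35.0, 9000)]

def cycles_to_failure(stress_amplitude):
    return C / stress_amplitude**m

damage_per_flight = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"damage per flight: {damage_per_flight:.3e}")
print(f"estimated life:    {1.0 / damage_per_flight:.0f} flights (failure at D = 1)")
```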
Procedia PDF Downloads 294
16711 Autonomous Landing of UAV on Moving Platform: A Mathematical Approach
Authors: Mortez Alijani, Anas Osman
Abstract:
Recently, the popularity of unmanned aerial vehicles (UAVs) has skyrocketed amidst unprecedented events and the global pandemic, as they play a key role in both the security and health sectors through surveillance, taking test samples, transporting crucial goods and spreading awareness among civilians. However, the process of designing and producing such aerial robots is constrained by internal and external factors that pose serious challenges. Landing is one of the key operations during flight; in particular, the autonomous landing of a UAV on a moving platform is a scientifically complex engineering problem. Typically, a successful automatic landing of a UAV on a moving platform requires accurate localization of the landing, fast trajectory planning, and robust control planning. To achieve these goals, information about the autonomous landing process, such as the intersection point, the positions of the platform and the UAV, and the inclination angle, is necessary. In this study, a mathematical approach to this problem in the X-Y plane, based on the inclination angle and the position of the UAV during the landing process, is presented. The experimental results depict the accurate position of the UAV, the intersection between the UAV and the moving platform, and the inclination angle in the landing process, allowing prediction of the intersection point.
Keywords: autonomous landing, inclination angle, unmanned aerial vehicles, moving platform, X-Y axis, intersection point
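A minimal geometric sketch of the intersection-point idea, under simplifying assumptions that are not taken from the paper: the UAV descends along a straight glide path with a fixed inclination angle in the X-Y plane (X horizontal, Y altitude), while the platform moves along X at constant speed.

```python
import math

# Assumed initial state (units: metres, m/s, degrees) -- illustrative only.
x_uav, y_uav = 0.0, 30.0        # UAV position (horizontal, altitude)
v_uav = 8.0                     # UAV speed along its glide path
incl_deg = 20.0                 # inclination angle of the glide path below horizontal
x_plat, v_plat = 15.0, 3.0      # platform X position and its constant speed

incl = math.radians(incl_deg)

# Touch-down point of the straight glide path (where Y reaches 0).
x_touch = x_uav + y_uav / math.tan(incl)
t_touch = (y_uav / math.sin(incl)) / v_uav      # time for the UAV to get there

# Where the platform will actually be at that time.
x_plat_at_touch = x_plat + v_plat * t_touch
miss = x_touch - x_plat_at_touch

print(f"predicted intersection point: x = {x_touch:.1f} m at t = {t_touch:.1f} s")
print(f"platform position at that time: {x_plat_at_touch:.1f} m (miss = {miss:.1f} m)")

# Platform speed that would make the two trajectories intersect exactly.
v_plat_required = (x_touch - x_plat) / t_touch
print(f"platform speed required for touchdown on the deck: {v_plat_required:.2f} m/s")
```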
Procedia PDF Downloads 164
16710 On the Homology Modeling, Structural Function Relationship and Binding Site Prediction of Human Alsin Protein
Authors: Y. Ruchi, A. Prerna, S. Deepshikha
Abstract:
Amyotrophic lateral sclerosis (ALS) is also known as "Lou Gehrig's disease". It is a neurodegenerative disease associated with degeneration of motor neurons in the cerebral cortex, brain stem, and spinal cord, characterized by distal muscle weakness, atrophy, normal sensation, pyramidal signs and progressive muscular paralysis. ALS2 is a juvenile autosomal recessive disorder, slowly progressive, that maps to chromosome 2q33 and is associated with mutations in the alsin gene, a putative GTPase regulator. In this paper, we performed homology modeling of the alsin2 protein using multiple templates (3KCI_A, 4LIM_A, 402W_A, 4D9S_A, and 4DNV_A) with the Prime program in the Schrödinger software. The modeled structure was then used to identify effective binding sites on the basis of structural and physical properties using the SiteMap program in the Schrödinger software, while structural and functional analysis was done using the PROSITE and ExPASy servers, which give insight into conserved domains and motifs that can be used for protein classification. This paper summarizes the structural, functional and binding site properties of the alsin2 protein. These binding sites can be potential drug target sites and can be used for docking studies.
Keywords: ALS, binding site, homology modeling, neuronal degeneration
Procedia PDF Downloads 389
16709 An Integrative Computational Pipeline for Detection of Tumor Epitopes in Cancer Patients
Authors: Tanushree Jaitly, Shailendra Gupta, Leila Taher, Gerold Schuler, Julio Vera
Abstract:
Genomics-based personalized medicine is a promising approach to fight aggressive tumors based on a patient's specific tumor mutation and expression profiles. A remarkable case is dendritic cell-based immunotherapy, in which tumor epitopes targeting a patient's specific mutations are used to design a vaccine that helps to stimulate cytotoxic T cell mediated anticancer immunity. Here we present a computational pipeline for epitope-based personalized cancer vaccines using patient-specific haplotype and cancer mutation profiles. In the proposed workflow, we analyze Whole Exome Sequencing and RNA Sequencing patient data to detect patient-specific mutations and their expression levels. Epitopes containing the tumor mutations are computationally predicted using the patient's haplotype and filtered based on their expression level, binding affinity, and immunogenicity. We calculate the binding energy for each filtered major histocompatibility complex (MHC)-peptide complex using docking studies and use this feature to further select good epitope candidates.
Keywords: cancer immunotherapy, epitope prediction, NGS data, personalized medicine
Procedia PDF Downloads 253
16708 Online Monitoring of Airborne Bioaerosols Released from a Composting, Green Waste Site
Authors: John Sodeau, David O'Connor, Shane Daly, Stig Hellebust
Abstract:
This study is the first to employ the online WIBS (Waveband Integrated Bioaerosol Sensor) technique for the monitoring of bioaerosol emissions and non-fluorescing "dust" released from a composting/green waste site. The purpose of the research was to provide a "proof of principle" for using WIBS to monitor such a location continually over days and nights in order to construct comparative "bioaerosol site profiles". Current impaction/culturing methods take many days to achieve results that are available with the WIBS technique in seconds. The real-time data obtained were then used to assess variations in the bioaerosol counts as a function of size, "shape", site location, working activity levels, time of day, relative humidity, wind speed and wind direction. Three short campaigns were undertaken: one classified as a "light" workload period, another as a "heavy" workload period and finally a weekend when the site was closed. One main bioaerosol size regime was found to predominate: 0.5 micron to 3 micron, with morphologies ranging from elongated to ellipsoidal/spherical. The real-time number-concentration data were consistent with an Andersen sampling protocol that was employed at the site. The number concentrations of fluorescent particles as a proportion of total particles counted amounted, on average, to ~1% for the "light" workday period, ~7% for the "heavy" workday period and ~18% for the weekend. The bioaerosol release profiles at the weekend were considerably different from those monitored during the working weekdays.
Keywords: bioaerosols, composting, fluorescence, particle counting in real-time
Procedia PDF Downloads 355
16707 Alterations in the Abundance of Ruminal Microbial Species during the Peripartal Period in Dairy Cows
Authors: S. Alqarni, J. C. McCann, A. Palladino, J. J. Loor
Abstract:
Seven fistulated Holstein cows were used from 3 weeks prepartum to 4 weeks postpartum to determine the relative abundance of 7 different species of ruminal microorganisms. The prepartum diet was based on corn silage. The postpartum diet included ground corn, grain by-products, and alfalfa haylage. Ruminal digesta were collected at five time points: -14, -7, 10, 20, and 28 days around parturition. Total DNA from ruminal digesta was isolated, and real-time quantitative PCR was used to determine the relative abundance of the bacterial species. Eubacterium ruminantium and Selenomonas ruminantium were not affected by time (P>0.05). Megasphaera elsdenii and Prevotella bryantii increased significantly postpartum (P<0.001). Conversely, Butyrivibrio proteoclasticus decreased gradually from -14 through 28 days (P<0.001). Fibrobacter succinogenes was affected by time, being lowest at day 10 (P=0.02), while Anaerovibrio lipolytica recorded the lowest abundance at -7 d followed by an increase by 20 days postpartum (P<0.001). Overall, these results indicate that changes in diet after parturition affect the abundance of ruminal bacteria, particularly M. elsdenii (a lactate-utilizing bacterium) and P. bryantii (a starch-degrading bacterium), which increased markedly after parturition, likely as a consequence of higher concentrate intake.
Keywords: rumen bacteria, transition cows, rumen metabolism, peripartal period
Procedia PDF Downloads 569
16706 Medical Image Augmentation Using Spatial Transformations for Convolutional Neural Network
Authors: Trupti Chavan, Ramachandra Guda, Kameshwar Rao
Abstract:
The lack of data is a major problem in medical image analysis using convolutional neural networks (CNNs). This work uses various spatial transformation techniques to address the medical image augmentation issue for knee detection and localization using an enhanced single shot detector (SSD) network. Spatial transforms such as the negative, histogram equalization, power law, sharpening, averaging, and Gaussian blurring help to generate more samples, serve as pre-processing methods, and highlight the features of interest. The experimentation is done on the OpenKnee dataset, which is a collection of knee images from openly available online sources. The CNN, called enhanced single shot detector (SSD), is utilized for the detection and localization of the knee joint from a given X-ray image. It is an enhanced version of the well-known SSD network, modified in such a way that it reduces the number of prediction boxes at the output side. It consists of a classification network (VGGNet) and an auxiliary detection network. The performance is measured in mean average precision (mAP), and 99.96% mAP is achieved using the proposed enhanced SSD with spatial transformations. It is also seen that the localization boundary is comparatively more refined and closer to the ground truth with spatial augmentation, giving better detection and localization of knee joints.
Keywords: data augmentation, enhanced SSD, knee detection and localization, medical image analysis, OpenKnee, spatial transformations
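A hedged sketch of the listed spatial transformations using OpenCV and NumPy; the synthetic image, gamma value and kernel sizes are assumptions chosen so the example is self-contained, in practice the input would be an OpenKnee X-ray loaded with cv2.imread(..., cv2.IMREAD_GRAYSCALE).

```python
import numpy as np
import cv2

# Synthetic grayscale "X-ray" so the sketch runs anywhere.
rng = np.random.default_rng(0)
img = rng.normal(128, 30, size=(256, 256)).clip(0, 255).astype(np.uint8)

def augmentations(image):
    gamma = 0.6                                   # power-law exponent (assumed)
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    sharpen_kernel = np.array([[0, -1, 0],
                               [-1, 5, -1],
                               [0, -1, 0]], dtype=np.float32)
    return {
        "negative":       255 - image,
        "hist_equalized": cv2.equalizeHist(image),
        "power_law":      cv2.LUT(image, table),
        "sharpened":      cv2.filter2D(image, -1, sharpen_kernel),
        "averaged":       cv2.blur(image, (5, 5)),
        "gaussian_blur":  cv2.GaussianBlur(image, (5, 5), 0),
    }

for name, out in augmentations(img).items():
    print(f"{name:15s} mean intensity: {out.mean():6.1f}")
```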
Procedia PDF Downloads 154
16705 An Architecture Framework for Design of Assembly Expert System
Authors: Chee Fai Tan, L. S. Wahidin, S. N. Khalil
Abstract:
Nowadays, manufacturing cost is one of the important factors that affect the product cost as well as company profit. There are many methods that have been used to reduce the manufacturing cost in order for a company to stay competitive. One of the factors that affect manufacturing cost is time. An expert system can be used as a method to reduce the manufacturing time. The purpose of the expert system is to diagnose and solve problems in the design of assembly. The paper describes an architecture framework for a design of assembly expert system that focuses on the commercial vehicle seat manufacturing industry.
Keywords: design of assembly, expert system, vehicle seat, mechanical engineering
Procedia PDF Downloads 438
16704 Reliability and Probability Weighted Moment Estimation for Three Parameter Mukherjee-Islam Failure Model
Authors: Ariful Islam, Showkat Ahmad Lone
Abstract:
The Mukherjee-Islam model is commonly used as a simple lifetime distribution to assess system reliability. The model exhibits a better fit to failure data and provides more appropriate information about the hazard rate and other reliability measures, as shown by various authors. It is possible to introduce a location parameter (i.e., a time before which failure cannot occur), which makes it a more useful failure distribution than the existing ones. Even after shifting the location of the distribution, it can represent decreasing, constant and increasing failure rates. It has been shown to represent the appropriate lower tail of the distribution of random variables having a fixed lower bound. This study presents the reliability computations and probability weighted moment estimation for the three-parameter model. A comparative analysis is carried out between the three-parameter finite range model and some existing bathtub-shaped curve fitting models. Since the probability weighted moment method is used, the results obtained can also be applied to small sample cases. The maximum likelihood estimation method is also applied in this study.
Keywords: comparative analysis, maximum likelihood estimation, Mukherjee-Islam failure model, probability weighted moment estimation, reliability
Procedia PDF Downloads 274
16703 In vitro Effects of Berberine on the Vitality and Oxidative Profile of Bovine Spermatozoa
Authors: Eva Tvrdá, Hana Greifová, Peter Ivanič, Norbert Lukáč
Abstract:
The aim of this study was to evaluate the dose- and time-dependent in vitro effects of berberine (BER), a natural alkaloid with numerous biological properties, on bovine spermatozoa during three time periods (0 h, 2 h, 24 h). Bovine semen samples were diluted and cultivated in physiological saline solution containing 0.5% DMSO together with 200, 100, 50, 10, 5, and 1 μmol/L BER. Spermatozoa motility was assessed using a computer-assisted semen analyzer. The viability of spermatozoa was assessed by the metabolic (MTT) assay, production of superoxide radicals was quantified using the nitroblue tetrazolium (NBT) test, and chemiluminescence was used to evaluate the generation of reactive oxygen species (ROS). Cell lysates were prepared, and the extent of lipid peroxidation (LPO) was evaluated using the TBARS assay. The results of the movement activity showed a significant increase in motility during long-term cultivation in the case of concentrations ranging between 1 and 10 μmol/L BER (P < 0.01; P < 0.001; 24 h). At the same time, supplementation of 1, 5 and 10 μmol/L BER led to a significant preservation of cell viability (P < 0.001; 24 h). BER addition in the range of 1-50 μmol/L also provided significantly higher protection against superoxide (P < 0.05) and ROS (P < 0.001; P < 0.01) overgeneration, as well as against LPO (P < 0.01; P < 0.05), after a 24 h cultivation. We may suggest that supplementation of BER to bovine spermatozoa, particularly at concentrations ranging between 1 and 50 μmol/L, offers protection to the motility, viability and oxidative status of the spermatozoa, which is particularly notable at 24 h.
Keywords: berberine, bulls, motility, oxidative profile, spermatozoa, viability
Procedia PDF Downloads 130
16702 Scheduling Jobs with Stochastic Processing Times or Due Dates on a Server to Minimize the Number of Tardy Jobs
Authors: H. M. Soroush
Abstract:
The problem of scheduling products and services for on-time delivery is of paramount importance in today's competitive environments. It arises in many manufacturing and service organizations where it is desirable to complete jobs (products or services) with different weights (penalties) on or before their due dates. In such environments, schedulers frequently have to decide whether to schedule a job based on its processing time, due date, and the penalty for tardy delivery in order to improve system performance. For example, it is common to measure the weighted number of late jobs or the percentage of on-time shipments to evaluate the performance of a semiconductor production facility or an automobile assembly line. In this paper, we address the problem of scheduling a set of jobs on a server where the processing times or due dates of jobs are random variables and fixed weights (penalties) are imposed on late deliveries. The goal is to find the schedule that minimizes the expected weighted number of tardy jobs. The problem is NP-hard; however, we explore three scenarios of the problem wherein: (i) both processing times and due dates are stochastic; (ii) processing times are stochastic and due dates are deterministic; and (iii) processing times are deterministic and due dates are stochastic. We prove that special cases of these scenarios are solvable optimally in polynomial time and introduce efficient heuristic methods for the general cases. Our computational results show that the heuristics perform well in yielding either optimal or near-optimal sequences. The results also demonstrate that the stochasticity of processing times or due dates can affect scheduling decisions. Moreover, the proposed problem is general in the sense that its special cases reduce to some new and some classical stochastic single machine models.
Keywords: number of late jobs, scheduling, single server, stochastic
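A small illustration, not the authors' heuristics: jobs with normally distributed processing times and deterministic due dates are sequenced by two simple dispatching rules, and the expected weighted number of tardy jobs is estimated by Monte Carlo simulation; all job data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed job set: (mean processing time, std, due date, tardiness weight).
jobs = [
    {"mean": 4.0, "std": 1.0, "due": 6.0,  "w": 3.0},
    {"mean": 2.0, "std": 0.5, "due": 5.0,  "w": 1.0},
    {"mean": 6.0, "std": 2.0, "due": 14.0, "w": 2.0},
    {"mean": 3.0, "std": 0.8, "due": 9.0,  "w": 4.0},
    {"mean": 5.0, "std": 1.5, "due": 20.0, "w": 1.5},
]

def expected_weighted_tardy(sequence, n_samples=10000):
    """Monte Carlo estimate of E[weighted number of tardy jobs] for a sequence."""
    total = 0.0
    for _ in range(n_samples):
        t = 0.0
        for j in sequence:
            t += max(rng.normal(jobs[j]["mean"], jobs[j]["std"]), 0.0)
            if t > jobs[j]["due"]:
                total += jobs[j]["w"]
    return total / n_samples

edd = sorted(range(len(jobs)), key=lambda j: jobs[j]["due"])                    # earliest due date
wspt = sorted(range(len(jobs)), key=lambda j: jobs[j]["mean"] / jobs[j]["w"])   # weighted SPT

print(f"EDD  sequence {edd}:  E[weighted tardy jobs] = {expected_weighted_tardy(edd):.3f}")
print(f"WSPT sequence {wspt}: E[weighted tardy jobs] = {expected_weighted_tardy(wspt):.3f}")
```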
Procedia PDF Downloads 497
16701 Effect of Distance Education on Students' Motivation in the Turkish Language and Literature Course
Authors: Meva Apaydin, Fatih Apaydin
Abstract:
The role of education in the development of society is great. Teaching and training began at the start of history, and different methods and techniques have been applied as time passed, all with the aim of raising the level of learning. In addition to traditional teaching methods, technology has been used in recent years. With the beginning of the use of the internet in education, some problems which could not be solved until that time have been addressed, and it has become clear that learners can be educated using contemporary methods as well as traditional ones. Thanks to technological developments, distance education is a system which paves the way for students to be educated individually, wherever and whenever they like, without the need for a physical school environment. Distance education has become prevalent because of the physical inadequacies of education institutions; as a result, disadvantageous circumstances such as social complexities, individual differences and especially geographical distance disappear. What is more, the speed of feedback between teachers and learners, the improvement in student motivation because there is no limitation of time, the low cost, and objective measurement and evaluation come to the foreground. Although distance education has teaching benefits, it also has limitations. Some of the most important problems are that issues which are highly likely to arise may not be solved in time, and that the lack of eye contact between the teacher and the learner means trustworthy feedback cannot be obtained; problems stemming from an inadequate technological background are merely some of them. Courses are conducted via distance education in many departments of the universities in our country. In recent years, giving lectures such as Turkish Language, English, and History in the first years of academic departments in the universities has become an increasingly prevalent practice. In this study, the delivery of the Turkish Language course via distance education is examined by analyzing the advantages and disadvantages of the internet-based distance education system.
Keywords: distance education, Turkish language, motivation, benefits
Procedia PDF Downloads 436
16700 Data-Driven Approach to Predict Inpatient's Estimated Discharge Date
Authors: Ayliana Dharmawan, Heng Yong Sheng, Zhang Xiaojin, Tan Thai Lian
Abstract:
To facilitate discharge planning, doctors are presently required to assign an Estimated Discharge Date (EDD) for each patient admitted to the hospital. This assignment of the EDD is largely based on the doctor's judgment, which can be difficult for cases that are complex or relatively new to the doctor. It is hypothesized that a data-driven approach would help doctors make accurate estimations of the discharge date. Making use of routinely collected data on inpatient discharges between January 2013 and May 2016, a predictive model was developed using machine learning techniques to predict the Length of Stay (and hence the EDD) of inpatients at the point of admission. The predictive performance of the model was compared to that of the clinicians using accuracy measures. Overall, the best performing model was able to predict the EDD with a 38% reduction in Average Squared Error (ASE) compared with the first EDD determined by the present method. Important predictors of the EDD were found to include the provisional diagnosis code, the patient's age, the attending doctor at admission, the medical specialty at admission, the accommodation type, and the mean length of stay of the patient in the past year. The predictive model can be used as a tool to accurately predict the EDD.
Keywords: inpatient, estimated discharge date, EDD, prediction, data-driven
Procedia PDF Downloads 174
16699 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetic
Authors: Fei Gao, Rodolfo C. Raga Jr.
Abstract:
This research proposal aims to ascertain the major risk factors for diabetes and to design a predictive model for risk assessment. The project aims to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. The phase relation values of each attribute were used to analyze and choose the attributes that might influence the examinees' outcomes, using the Diabetes Health Indicators Dataset from Kaggle as the research data. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and we assess performance using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle
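A hedged sketch of the evaluation protocol described above, multi-metric cross-validation of several classifiers; synthetic data stands in for the Kaggle dataset, and only four algorithms are shown rather than eight, all chosen as assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the Diabetes Health Indicators data so the sketch
# runs anywhere; in practice X, y would come from the real dataset.
X, y = make_classification(n_samples=2000, n_features=21, n_informative=8,
                           weights=[0.85, 0.15], random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree":       DecisionTreeClassifier(max_depth=6, random_state=0),
    "random forest":       RandomForestClassifier(n_estimators=200, random_state=0),
    "k-nearest neighbors": KNeighborsClassifier(),
}
scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]

for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5, scoring=scoring)
    summary = ", ".join(f"{m}={cv['test_' + m].mean():.3f}" for m in scoring)
    print(f"{name:20s} {summary}")
```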
Procedia PDF Downloads 75
16698 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time
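The frequency-domain trick can be sketched in a few lines: correlating a feature map with a part filter via the FFT yields the same scores as direct sliding-window correlation. The random feature map and filter sizes below are arbitrary assumptions; this illustrates the idea only, not the DPM cascade itself.

```python
import numpy as np

rng = np.random.default_rng(1)
feature_map = rng.normal(size=(120, 160))   # stand-in for one HOG feature channel
part_filter = rng.normal(size=(6, 6))       # stand-in for one DPM part filter

# Direct sliding-window correlation ("valid" positions only).
H, W = feature_map.shape
h, w = part_filter.shape
direct = np.empty((H - h + 1, W - w + 1))
for i in range(direct.shape[0]):
    for j in range(direct.shape[1]):
        direct[i, j] = np.sum(feature_map[i:i + h, j:j + w] * part_filter)

# Same scores via the frequency domain: correlation equals convolution with the
# flipped filter, and convolution becomes a pointwise product after an FFT.
shape = (H + h - 1, W + w - 1)
F = np.fft.rfft2(feature_map, shape)
K = np.fft.rfft2(part_filter[::-1, ::-1], shape)
full = np.fft.irfft2(F * K, shape)
fft_scores = full[h - 1:H, w - 1:W]

print("max abs difference:", np.max(np.abs(direct - fft_scores)))  # ~1e-12
```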
Procedia PDF Downloads 281
16697 An A-Star Approach for the Quickest Path Problem with Time Windows
Authors: Christofas Stergianos, Jason Atkin, Herve Morvan
Abstract:
As air traffic increases, more airports are interested in utilizing optimization methods. Many processes happen in parallel at an airport, and complex models are needed in order to have a reliable solution that can be implemented for ground movement operations. The ground movement of aircraft at an airport, allocating a path for each aircraft to follow in order to reach its destination (e.g. runway or gate), is one process that could be optimized. The Quickest Path Problem with Time Windows (QPPTW) algorithm has been developed to provide conflict-free routing of vehicles and has been applied to routing aircraft around an airport. It was subsequently modified to increase its accuracy for airport applications. These modifications take into consideration specific characteristics of the problem, such as: the pushback process, which accounts for the extra time that is needed for pushing back an aircraft and turning its engines on; stand holding, where any waiting should be allocated to the stand; and runway sequencing, where the sequence of the aircraft that take off is optimized and has to be respected. QPPTW involves searching for the quickest path by expanding the search in all directions, similarly to Dijkstra's algorithm. Finding a way to direct the expansion can potentially assist the search and achieve a better performance. We have further modified the QPPTW algorithm to use a heuristic approach in order to guide the search. This new algorithm is based on the A-star search method but estimates the remaining time (instead of distance) in order to assess how far the target is. It is important to consider the remaining time that is needed to reach the target, so that delays caused by other aircraft can be part of the optimization method. All of the other characteristics are still considered, and time windows are still used in order to route multiple aircraft rather than a single aircraft. In this way, the quickest path is found for each aircraft while taking into account the movements of the previously routed aircraft. After running experiments using a week of real aircraft data from Zurich Airport, the new algorithm (A-star QPPTW) was found to route aircraft much more quickly, being especially fast in routing the departing aircraft, where pushback delays are significant. On average, A-star QPPTW could route a full day (755 to 837 aircraft movements) 56% faster than the original algorithm. In total, the routing of a full week of aircraft took only 12 seconds with the new algorithm, 15 seconds faster than the original algorithm. For real-time application, the algorithm needs to be very fast, and this speed increase will allow us to add additional features and complexity, allowing further integration with other processes in airports and leading to more optimized and environmentally friendly airports.
Keywords: a-star search, airport operations, ground movement optimization, routing and scheduling
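A compact sketch of the core idea, time-based A-star on a taxiway graph with an admissible heuristic given by precomputed free-flow times to the target; the toy graph, taxi times and the omission of the time-window bookkeeping for previously routed aircraft are all simplifications, not the QPPTW implementation.

```python
import heapq

# Toy taxiway graph: node -> list of (neighbour, taxi time in seconds).
graph = {
    "gate":   [("A", 40), ("B", 70)],
    "A":      [("B", 25), ("C", 60)],
    "B":      [("C", 30)],
    "C":      [("runway", 50)],
    "runway": [],
}

# Admissible heuristic: a lower bound on the remaining taxi time to the runway
# (here simply assumed free-flow times; never an overestimate).
h = {"gate": 110, "A": 90, "B": 80, "C": 50, "runway": 0}

def a_star(start, goal):
    """Time-based A*: g = elapsed taxi time, f = g + estimated remaining time."""
    open_set = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return g, path
        for nbr, taxi_time in graph[node]:
            g_new = g + taxi_time
            if g_new < best_g.get(nbr, float("inf")):
                best_g[nbr] = g_new
                heapq.heappush(open_set, (g_new + h[nbr], g_new, nbr, path + [nbr]))
    return None

total_time, route = a_star("gate", "runway")
print(f"quickest route {route} takes {total_time} s")
```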
Procedia PDF Downloads 231
16696 Hohmann Transfer and Bi-Elliptic Hohmann Transfer in TRAPPIST-1 System
Authors: Jorge L. Nisperuza, Wilson Sandoval, Edward. A. Gil, Johan A. Jimenez
Abstract:
In orbital mechanics, an active research topic is the calculation of interplanetary trajectories that are efficient in terms of energy and time. In this sense, this work concerns the calculation of the orbital elements for sending interplanetary probes within the extrasolar system TRAPPIST-1. Specifically, using the mathematical expressions for the circular and elliptical trajectory parameters and the expressions for the flight time and the velocity increments of the orbital transfers, the orbital parameters and the plots of the Hohmann and bi-elliptic Hohmann trajectories for sending a probe from the innermost planet to all the other planets of the studied system are obtained. The relationships between the velocity increments and between the flight times for the two transfer types are found. The results show that, for all cases under consideration, the Hohmann transfer has the lowest energy and time cost, in agreement with the theory associated with Hohmann and bi-elliptic Hohmann transfers. Savings in the velocity increment of up to 87% were found, occurring for the transfer between the two innermost planets, whereas the flight time increases by a factor of up to 6.6 if the bi-elliptic transfer is used, in the case of sending a probe from the innermost planet to the outermost one.
Keywords: bi-elliptic Hohmann transfer, exoplanet, extrasolar system, Hohmann transfer, TRAPPIST-1
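A worked sketch of the two transfers using the vis-viva relation; the gravitational parameter and the orbital radii of TRAPPIST-1b and TRAPPIST-1c are approximate literature values used only for illustration, not the exact figures from the paper.

```python
import math

MU_SUN = 1.32712e20                    # m^3/s^2
AU = 1.495978707e11                    # m
mu = 0.089 * MU_SUN                    # TRAPPIST-1, roughly 0.089 solar masses (approx.)
r1 = 0.0115 * AU                       # TRAPPIST-1b orbital radius (approx.)
r2 = 0.0158 * AU                       # TRAPPIST-1c orbital radius (approx.)

def v_circ(r):
    return math.sqrt(mu / r)

def v_visviva(r, a):
    """Speed at radius r on an orbit of semi-major axis a (vis-viva equation)."""
    return math.sqrt(mu * (2.0 / r - 1.0 / a))

def hohmann(r1, r2):
    a_t = (r1 + r2) / 2.0
    dv1 = abs(v_visviva(r1, a_t) - v_circ(r1))
    dv2 = abs(v_circ(r2) - v_visviva(r2, a_t))
    t = math.pi * math.sqrt(a_t**3 / mu)
    return dv1 + dv2, t

def bi_elliptic(r1, r2, rb):
    """Two half-ellipses through an intermediate apoapsis rb > max(r1, r2)."""
    a1, a2 = (r1 + rb) / 2.0, (r2 + rb) / 2.0
    dv1 = abs(v_visviva(r1, a1) - v_circ(r1))
    dv2 = abs(v_visviva(rb, a2) - v_visviva(rb, a1))
    dv3 = abs(v_circ(r2) - v_visviva(r2, a2))
    t = math.pi * (math.sqrt(a1**3 / mu) + math.sqrt(a2**3 / mu))
    return dv1 + dv2 + dv3, t

dv_h, t_h = hohmann(r1, r2)
dv_b, t_b = bi_elliptic(r1, r2, rb=3.0 * r2)
print(f"Hohmann:     dv = {dv_h/1000:.2f} km/s, time = {t_h/3600:.1f} h")
print(f"Bi-elliptic: dv = {dv_b/1000:.2f} km/s, time = {t_b/3600:.1f} h")
```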
Procedia PDF Downloads 192