Search results for: exponential time differencing method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 32195

31265 Model Based Design of Fly-by-Wire Flight Controls System of a Fighter Aircraft

Authors: Nauman Idrees

Abstract:

Modeling and simulation during the conceptual design phase are the most effective means of system testing, saving time and cost compared to testing hardware prototypes, which are mostly unavailable at that stage. This paper applies the model-based design (MBD) method to the fly-by-wire flight controls system of a fighter aircraft using Simulink. The process begins with system definition and layout, in which modeling requirements and system components are identified, followed by a hierarchical system layout that establishes the sequence of operation, the system's interfaces with the external environment, and the internal interfaces between components. In the second step, each component within the system architecture is modeled along with its physical and functional behavior. Finally, all modeled components are combined, per the developed system architecture, to form the complete fly-by-wire flight controls system. The resulting system model can be simulated in any suitable simulation software to verify that the desired requirements are met without building a physical prototype, again saving time and cost.

Keywords: fly-by-wire, flight controls system, model based design, Simulink

Procedia PDF Downloads 107
31264 Using Gaussian Process in Wind Power Forecasting

Authors: Hacene Benkhoula, Mohamed Badreddine Benabdella, Hamid Bouzeboudja, Abderrahmane Asraoui

Abstract:

Wind is a random variable that is difficult to master; for this reason, we developed mathematical and statistical methods to model and forecast wind power. Gaussian Processes (GP) are one of the most widely used families of stochastic processes for modeling dependent data observed over time, space, or time and space. A GP places a prior over the unknown underlying process and is fully specified by its mean and covariance functions. The purpose of this paper is to show how to forecast wind power using GPs. The Gaussian process forecasting method is presented, and to validate the approach, a simulation in the MATLAB environment is given.
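As a concrete illustration of the approach described above, Gaussian process regression for a wind-power series can be sketched with scikit-learn (the paper itself works in MATLAB). The synthetic series, kernel choice, and forecast horizon below are illustrative assumptions, not the authors' setup.

```python
# Minimal GP forecasting sketch: fit a GP to a noisy periodic "wind power"
# series and predict ahead with uncertainty estimates.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical hourly wind-power measurements over two days.
t = np.linspace(0, 48, 96).reshape(-1, 1)               # time in hours
power = np.sin(2 * np.pi * t.ravel() / 24) + 0.1 * rng.standard_normal(96)

# RBF kernel models smooth temporal correlation; WhiteKernel models noise.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t, power)

# Forecast the next 12 hours; the GP returns a mean and a standard
# deviation, i.e. a full predictive distribution rather than a point value.
t_future = np.linspace(48, 60, 24).reshape(-1, 1)
mean, std = gp.predict(t_future, return_std=True)
print(mean.shape, std.shape)
```

The predictive standard deviation is what makes GPs attractive for wind forecasting: it quantifies how much the point forecast can be trusted at each horizon.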

Keywords: wind power, Gaussian process, modelling, forecasting

Procedia PDF Downloads 396
31263 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation

Authors: Kausar Harun, Ahmad Azmin Mohamad

Abstract:

Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved more accurate and time-efficient. Integration of the two methods is therefore essential to closely reproduce the properties of synthesized ZnO. In this study, ZnO nanoparticles were grown by the sol-gel storage method with zinc acetate dihydrate as precursor and methanol as solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was 12 hours. Phase and structural analysis showed that single-phase ZnO with the wurtzite hexagonal structure was produced. Quantitative analysis by Rietveld refinement yielded structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions, a = b = 3.2498 Å and c = 5.2068 Å, were later used as the main input for the first-principles calculations. Applying density functional theory (DFT) as embedded in the CASTEP code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized-gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) gave the structure with the lowest energy and the smallest lattice deviations. Emphasis was also given to modifying the valence electron energy levels to overcome the underestimation typical of DFT calculations: the Zn and O valence corrections were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated with the GGA-PBE+U functional using the ultrasoft-pseudopotential method.
In conclusion, incorporating Rietveld analysis into first-principles calculation proved valid, as the resulting properties were comparable with those reported in the literature. The time otherwise needed to evaluate certain properties via physical testing is eliminated, since the simulation can be done computationally.

Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles

Procedia PDF Downloads 300
31262 Reliability Analysis of Heat Exchanger Cycle Using Non-Parametric Method

Authors: Apurv Kulkarni, Shreyas Badave, B. Rajiv

Abstract:

Non-parametric reliability techniques are useful for assessing the reliability of systems for which failure rates are not available, particularly when detecting component malfunction during ongoing operation is the key purpose. The main purpose of the heat exchanger cycle discussed in this paper is to provide hot water at a constant temperature for long periods of time. In such a cycle, certain components play a crucial role, and this paper presents an effective way to predict their malfunctioning through determination of system reliability. The feasibility of the method is demonstrated with the help of various test cases.
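The abstract does not spell out the k-statistics computation, but the core non-parametric idea, estimating reliability directly from observed failure data without assuming a failure-rate model, can be sketched as follows. The failure times are hypothetical.

```python
# Empirical (non-parametric) reliability: R(t) is estimated as the fraction
# of units still operating at time t, with no distributional assumption.
import numpy as np

def empirical_reliability(failure_times, t):
    """Fraction of observed units whose failure time exceeds t."""
    failure_times = np.asarray(failure_times, dtype=float)
    return float(np.mean(failure_times > t))

# Hypothetical times-to-failure (hours) for a heat-exchanger component.
times = [120, 340, 560, 640, 800, 950, 1100, 1500]
for t in (100, 600, 1200):
    print(t, empirical_reliability(times, t))   # 1.0, 0.625, 0.125
```

A drop in the estimated R(t) relative to a baseline can then flag a malfunctioning component during operation.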

Keywords: heat exchanger cycle, k-statistics, PID controller, system reliability

Procedia PDF Downloads 374
31261 Modeling User Departure Time Choice for Trips in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Modeling users' decisions on departure time is the main motivation for this research. In particular, it examines the impact of socio-demographic features, household and job characteristics, and trip qualities on individuals' departure time choice. Departure time alternatives are presented as adjacent discrete time periods, and the choice between them is modeled with a discrete choice model. Since work trips account for a great deal of early-morning travel and of traffic congestion at that time of day, this study focuses on the work trip over the entire day. Using a stated-preference questionnaire, the study models users' departure time choice as affected by the congestion pricing plan in downtown Tehran. Experimental results demonstrate a significant socio-demographic impact on work-trip departure times. These findings have substantial implications for transportation planning analysis. In particular, the analysis shows that ignoring the effects of these variables could yield erroneous information; consequently, decisions in transportation planning and air quality management would fail and waste financial resources.
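The choice between adjacent discrete departure-time periods is conventionally formulated as a multinomial logit model. The sketch below is a generic illustration with made-up coefficients and attributes, not the model estimated in the study.

```python
# Multinomial logit over three morning departure windows. Each window has a
# systematic utility V = b_time * travel_time + b_toll * toll, and the
# choice probability is P_i = exp(V_i) / sum_j exp(V_j).
import numpy as np

def choice_probabilities(utilities):
    v = np.asarray(utilities, dtype=float)
    e = np.exp(v - v.max())        # subtract the max for numerical stability
    return e / e.sum()

# Hypothetical coefficients (disutility of travel time and of the toll).
b_time, b_toll = -0.05, -0.3

# Attributes of three windows, e.g. 6-7, 7-8, 8-9 a.m., under a
# congestion-pricing scheme that tolls the peak window most heavily.
travel_time = np.array([30.0, 45.0, 40.0])   # minutes
toll = np.array([0.0, 2.0, 1.0])             # monetary units

P = choice_probabilities(b_time * travel_time + b_toll * toll)
print(P.round(3))
```

In estimation, the coefficients would be fitted to the stated-preference responses by maximum likelihood; socio-demographic terms enter the utilities the same way.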

Keywords: modeling, departure time, travel timing, time of the day, congestion pricing, transportation planning

Procedia PDF Downloads 423
31260 Subjective Time as a Marker of the Present Consciousness

Authors: Anastasiya Paltarzhitskaya

Abstract:

Subjective time plays an important role in consciousness processes and in self-awareness of the present moment. The concept of intrinsic neural timescales (INT) explains the differences in perceiving various time intervals. The capacity to experience the present builds on fundamental properties of temporal cognition. The challenge that both philosophy and neuroscience try to answer is how the brain differentiates the present from the past and future. In our work, we analyze papers describing the mechanisms involved in the perception of the 'present' and the 'non-present', i.e., future and past moments. Taking into account that we perceive time intervals even during rest or relaxation, we suppose that default-mode network activity can code temporal features, including the present moment. We compare results of time-perception studies in which brain activity was recorded in states with different flows of time, including resting states and "mental time travel". According to the concept of mental time travel, imagining such scenarios draws on episodic memory. However, some papers show that the hippocampal region is not activated during time travel. This is a controversial result, further complicated by the phenomenological aspect that includes a holistic set of information about the individual's past and future.

Keywords: temporal consciousness, time perception, memory, present

Procedia PDF Downloads 61
31259 A Superposition Method in Analyses of Clamped Thick Plates

Authors: Alexander Matrosov, Guriy Shirunov

Abstract:

A superposition method based on Lamé's idea is used to obtain a general analytical solution for the stress and strain state of a rectangular isotropic elastic thick plate. The solution is built from three solutions of the method of initial functions in the form of double trigonometric series. Results for the bending of a thick plate under normal stress on its top face, with two opposite sides clamped and the others free of load, are presented and compared with FEM modelling.

Keywords: general solution, method of initial functions, superposition method, thick isotropic plates

Procedia PDF Downloads 582
31258 Effect of Masonry Infill in R.C. Framed Buildings

Authors: Pallab Das, Nabam Zomleen

Abstract:

Effective dissipation of the lateral loads arising from seismic forces determines the strength, durability, and safety of a structure. Masonry infill has high stiffness and strength that can be utilized effectively for lateral load dissipation by incorporating it into building construction. Masonry, however, behaves in a highly nonlinear manner, so it is important to find a generalized yet rational approach to determine its nonlinear behavior, its failure modes, and its response when incorporated into a building. Most countries do not specify a procedure for the design of masonry infill walls, although many analytical modeling methods are available in the literature, e.g., the equivalent diagonal strut method and finite element modeling. In this paper, the masonry infill is modeled, and a 6-storey bare-frame building and the same building with masonry infill are analyzed using SAP2000 v14 to obtain the inter-storey drift by time-history analysis and the capacity curve by pushover analysis. The analysis shows that while the structure remains well within the CP (collapse prevention) performance level in both cases, there is a considerable reduction in inter-storey drift, of about 28%, when the building is analyzed with masonry infill walls.

Keywords: capacity curve, masonry infill, nonlinear analysis, time history analysis

Procedia PDF Downloads 368
31257 DNA Damage and Apoptosis Induced in Drosophila melanogaster Exposed to Different Durations of 2400 MHz Radio Frequency Electromagnetic Field Radiation

Authors: Neha Singh, Anuj Ranjan, Tanu Jindal

Abstract:

Over the last decade, the exponential growth of mobile communication has been accompanied by a parallel increase in the density of electromagnetic fields (EMF). The continued expansion of mobile phone usage raises important questions, as EMF, especially radio frequency (RF), has long been suspected of having biological effects. In the present experiments, we studied the effects of RF-EMF on cell death (apoptosis) and DNA damage in a well-tested biological model, Drosophila melanogaster, exposed to a 2400 MHz frequency for different durations (2, 4, 6, 8, 10, and 12 hrs each day) for five continuous days under ambient temperature and humidity conditions inside an exposure chamber. The flies were grouped into control, sham-exposed, and exposed groups with 100 flies each. Well-known techniques, the Comet assay and the TUNEL (Terminal deoxynucleotidyl transferase dUTP Nick End Labeling) assay, were used to detect DNA damage and apoptosis, respectively. The results showed DNA damage in the brain cells of Drosophila that increased with exposure duration when the control, sham-exposed, and exposed groups were compared, indicating that EMF radiation induced stress in the organism, leading to DNA damage and cell death. The processes of apoptosis and mutation follow similar pathways in all eukaryotic cells; therefore, studying apoptosis and genotoxicity in Drosophila is similarly relevant for human beings.

Keywords: cell death, apoptosis, Comet Assay, DNA damage, Drosophila, electromagnetic fields, EMF, radio frequency, RF, TUNEL assay

Procedia PDF Downloads 151
31256 Solution of Hybrid Fuzzy Differential Equations

Authors: Mahmood Otadi, Maryam Mosleh

Abstract:

Hybrid differential equations have a wide range of applications in science and engineering. In this paper, the homotopy analysis method (HAM) is applied to obtain series solutions of hybrid fuzzy differential equations. Using the homotopy analysis method, it is possible to find the exact solution or an approximate solution of the problem. Comparisons are made between the improved predictor-corrector method, the homotopy analysis method, and the exact solution. Finally, we illustrate our approach with some numerical examples.

Keywords: fuzzy number, fuzzy ODE, HAM, approximate method

Procedia PDF Downloads 499
31255 A Validated High-Performance Liquid Chromatography-UV Method for Determination of Malondialdehyde: Application to a Study in Chronically Ciprofloxacin-Treated Rats

Authors: Anil P. Dewani, Ravindra L. Bakal, Anil V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV detection for the in-vivo determination of malondialdehyde in rats, measured as the malondialdehyde-thiobarbituric acid complex (MDA-TBA). The HPLC-UV separation was achieved in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by UV detection at 278 nm. The chromatographic conditions were optimized by varying the concentration and pH and then the percentage of organic phase; the optimal mobile phase was a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile (80:20 % v/v). The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ, estimated from the standard deviation of the response and the slope of the calibration curve) for the MDA-TBA complex were 110 ng/ml and 363 ng/ml, respectively. The method was linear for MDA spiked in plasma and derivatized at concentrations ranging from 100 to 1000 ng/ml. The precision, measured as relative standard deviation for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6%, respectively. The HPLC method was applied to monitor MDA levels in rats subjected to chronic treatment with ciprofloxacin (CFL) (5 mg/kg/day) for 21 days, and results were compared with findings in control-group rats. Mean peak areas of the two study groups were compared using an unpaired Student's t-test. The p-value was < 0.001, indicating significant results and suggesting increased MDA levels in rats after 21 days of chronic CFL treatment.
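LOD and LOQ based on the standard deviation of the response (sigma) and the calibration-curve slope (S), as referenced above, are conventionally computed with the ICH formulas LOD = 3.3·sigma/S and LOQ = 10·sigma/S. The numbers below are illustrative, not the study's raw data.

```python
# ICH-style sensitivity limits from calibration data.
def lod(sigma, slope):
    """Limit of detection: 3.3 * sigma / S."""
    return 3.3 * sigma / slope

def loq(sigma, slope):
    """Limit of quantification: 10 * sigma / S."""
    return 10.0 * sigma / slope

sigma = 33.0   # hypothetical SD of the response
slope = 1.0    # hypothetical calibration-curve slope
print(lod(sigma, slope), loq(sigma, slope))
```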

Keywords: MDA, TBA, ciprofloxacin, HPLC-UV

Procedia PDF Downloads 314
31254 Optimal Control of Volterra Integro-Differential Systems Based on Legendre Wavelets and Collocation Method

Authors: Khosrow Maleknejad, Asyieh Ebrahimzadeh

Abstract:

In this paper, the numerical solution of the optimal control problem (OCP) for systems governed by a Volterra integro-differential (VID) equation is considered. The method is developed by means of Legendre wavelet approximation and the collocation method. The properties of Legendre wavelets, together with the Gaussian integration method, are utilized to reduce the problem to a nonlinear programming problem. Some numerical examples are given to confirm the accuracy and ease of implementation of the method.

Keywords: collocation method, Legendre wavelet, optimal control, Volterra integro-differential equation

Procedia PDF Downloads 377
31253 Learning Dynamic Representations of Nodes in Temporally Variant Graphs

Authors: Sandra Mitrovic, Gaurav Singh

Abstract:

In many industries, including telecommunications, churn prediction has been a topic of active research. Much attention has been devoted to devising the most informative features, and this area has gained even more focus with the spread of (social) network analytics. Call detail records (CDRs) have been used to construct customer networks and extract potentially useful features. However, to the best of our knowledge, no studies including network features have yet proposed a generic way of representing network information; instead, ad-hoc, dataset-dependent solutions have been suggested. In this work, we build upon a recently presented method (node2vec) to obtain representations for nodes in an observed network. The proposed approach is generic and applicable to any network and domain. Unlike node2vec, which assumes a static network, we consider a dynamic, time-evolving network. To account for this, we construct the feature representation of each node by generating its node2vec representations at different timestamps, concatenating them, and finally compressing them with an auto-encoder-like method in order to retain reasonably long yet informative feature vectors. We test the proposed method on a churn prediction task in the telco domain. To predict churners at timestamp ts+1, we construct training and testing datasets consisting of feature vectors from the time intervals [t1, ts-1] and [t2, ts], respectively, and use traditional supervised classification models such as SVM and logistic regression. Results show the effectiveness of the proposed approach compared to ad-hoc feature-selection-based approaches and static node2vec.
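The pipeline above (per-timestamp embeddings, concatenation, compression) can be sketched as follows. Random arrays stand in for real node2vec output, and PCA stands in for the auto-encoder-like compression step; both substitutions are assumptions made for the sake of a runnable example.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
n_nodes, dim, n_timestamps = 100, 16, 5

# Stand-in for node2vec embeddings computed on each network snapshot.
snapshots = [rng.standard_normal((n_nodes, dim)) for _ in range(n_timestamps)]

# Step 1: concatenate each node's embeddings across timestamps.
concatenated = np.hstack(snapshots)                  # shape (n_nodes, 80)

# Step 2: compress to a shorter but still informative feature vector
# (an auto-encoder would learn a nonlinear version of this mapping).
compressed = PCA(n_components=8).fit_transform(concatenated)
print(concatenated.shape, compressed.shape)
```

The compressed vectors then feed a standard classifier (SVM, logistic regression) for churn prediction.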

Keywords: churn prediction, dynamic networks, node2vec, auto-encoders

Procedia PDF Downloads 303
31252 The Optimum Operating Conditions for the Synthesis of Zeolite from Waste Incineration Fly Ash by Alkali Fusion and Hydrothermal Methods

Authors: Yi-Jie Lin, Jyh-Cherng Chen

Abstract:

The fly ash from waste incineration processes is usually hazardous, and its disposal or reuse is difficult. In this study, waste incineration fly ash was converted to useful zeolites by the alkali fusion and hydrothermal synthesis method. The influence of different operating conditions (the Si/Al ratio, the liquid-to-solid ratio during hydrolysis, and the hydrothermal time) was investigated to find the optimum conditions for zeolite synthesis from waste incineration fly ash. The results showed that the concentrations of heavy metals in the leachate of the Toxicity Characteristic Leaching Procedure (TCLP) were all lower than the regulatory limits except for lead. The optimum operating conditions were Si/Al = 40, NaOH/ash = 1.5, alkali fusion at 400°C for 40 min, hydrolysis with a liquid-to-solid ratio (L/S) of 200 at 105°C for 24 h, and hydrothermal synthesis at 105°C for 24 h. The specific surface area was significantly increased from 8.59 m2/g (fly ash) to 651.51 m2/g (synthesized zeolite). The influence of the operating conditions on the synthesis followed the order Si/Al ratio > hydrothermal time > hydrolysis L/S ratio. The synthesized zeolites can be reused as good adsorbents to control air or wastewater pollutants. The goals of fly ash detoxification, reduction, and waste recycling/reuse are thus achieved.

Keywords: alkali fusion, hydrothermal, fly ash, zeolite

Procedia PDF Downloads 226
31251 Unsteady 3D Post-Stall Aerodynamics Accounting for Effective Loss in Camber Due to Flow Separation

Authors: Aritras Roy, Rinku Mukherjee

Abstract:

The current study couples a quasi-steady Vortex Lattice Method with a camber-correcting technique, 'decambering', for unsteady post-stall flow prediction. The wake is force-free and discrete, such that the wake lattices move with the free stream once shed from the wing. It is observed that, at some post-stall angles of attack, the time-averaged unsteady lift coefficient drops relative to its steady counterpart. Multiple solutions occur post-stall, and three different algorithms for choosing among them show both unsteadiness and non-convergence of the iterations. The spanwise distribution of the lift coefficient also shows a sawtooth pattern. The vorticity distribution changes both along the span and in the free-stream direction as the wake develops over time, with a distinct roll-up that increases with time.

Keywords: post-stall, unsteady, wing, aerodynamics

Procedia PDF Downloads 358
31250 Cars in a Neighborhood: A Case of Sustainable Living in Sector 22 Chandigarh

Authors: Maninder Singh

Abstract:

The city of Chandigarh is under strain from exponential growth in car density across its neighborhoods. The consumerist nature of today's society is to blame for this menace: everyone wants to own and ride a car, car manufacturers are busy selling two or more cars per household, and the Regional Transport Offices are busy issuing as many licenses for new vehicles as they can in order to generate revenue in the form of road tax. Car traffic in the neighborhoods of Chandigarh has reached a tipping point. A more empirical and sustainable model of cars per household is needed, based on specific parameters of livable neighborhoods. Sector 22 in Chandigarh is one of the first residential sectors established in the city. There is scope to think, reflect, and work out how many cars can be sold to citizens before the argument is lost to traffic problems, parking problems, and road rage. This is where the true challenge of a planner or designer of the city lies; currently, Chandigarh offers no clear answers to this problem. The way forward is to look at the spatial mapping, planning, and design of car parking units, rather than suggesting extreme measures such as banning cars (short-term) or citywide transport plans (very long-term). This is a chance to resolve the problem with a pragmatic approach from a citizen's perspective instead of an orthodox development planner's methodology. Since citizens are at the center of how the problem is defined and addressed, acceptable solutions are more likely to emerge, and this novel, innovative process would lead to a more acceptable and sustainable approach to the number of car parks in Chandigarh's neighborhoods.

Keywords: cars, Chandigarh, neighborhood, sustainable living, walkability

Procedia PDF Downloads 142
31249 Magnitude of Infection and Associated Factors in Open Tibial Fractures Treated Operatively at Addis Ababa Burn Emergency and Trauma Center, April 2023

Authors: Tuji Mohammed Sani

Abstract:

Background: An open tibial fracture is an injury in which the fractured bone communicates directly with the outside environment. Due to the specific anatomical features of the tibia (limited soft tissue coverage), more than a quarter of its fractures are classified as open, representing the most common open long-bone injuries. Open tibial fractures frequently cause significant bone comminution, periosteal stripping, soft tissue loss, and contamination, and are prone to bacterial entry with biofilm formation, which increases the risk of deep bone infection. Objective: The main objective of the study was to determine the prevalence of infection and its associated factors in surgically treated open tibial fractures at the Addis Ababa Burn Emergency and Trauma (AaBET) center. Method: A facility-based retrospective cross-sectional study was conducted among patients treated for open tibial fracture at the AaBET center from September 2018 to September 2021. The data were collected from patients' charts using a structured data collection form, then entered and analyzed using SPSS version 26. Bivariable and multiple binary logistic regressions were fitted. Multicollinearity was checked among candidate variables using the variance inflation factor and tolerance, which were less than 5 and greater than 0.2, respectively. Model adequacy was tested using the Hosmer-Lemeshow goodness-of-fit test (P=0.711). AORs at 95% CI are reported, and P-values < 0.05 were considered statistically significant. Result: This study found that 33.9% of the participants had an infection. Initial IV antibiotic time (AOR=2.924, 95% CI: 1.160-7.370), time of wound closure from injury (AOR=3.524, 95% CI: 1.798-6.908), injury-to-admission time (AOR=2.895, 95% CI: 1.402-5.977), and definitive fixation method (AOR=0.244, 95% CI: 0.113-0.4508) were the factors found to have a statistically significant association with the occurrence of infection.
Conclusion: The rate of infection in open tibial fractures indicates a need to improve the management of open tibial fractures treated at the AaBET center. Time from injury to admission, time from injury to first debridement, wound closure time, and initial intravenous antibiotic time are important factors that can readily be amended to improve the infection rate. Whether the wound was closed within seven days was a particularly important factor associated with the occurrence of infection.

Keywords: infection, open tibia, fracture, magnitude

Procedia PDF Downloads 67
31248 In vitro Cytotoxicity Study on Silver Powders Synthesized via Different Routes

Authors: Otilia Ruxandra Vasile, Ecaterina Andronescu, Cristina Daniela Ghitulica, Bogdan Stefan Vasile, Roxana Trusca, Eugeniu Vasile, Alina Maria Holban, Carmen Mariana Chifiriuc, Florin Iordache, Horia Maniu

Abstract:

Engineered powders offer great promise in several applications, but little is known about their cytotoxic effects. The aim of the current study was the synthesis and cytotoxicity examination of silver powders prepared via the pyrosol method at 600°C, 650°C, and 700°C, and via the sol-gel method with calcination at 500°C, 600°C, 700°C, and 800°C. We chose to synthesize and examine silver particles' cytotoxicity due to their use in biological applications. The synthesized Ag powders were characterized from the structural, compositional, and morphological points of view using XRD, SEM, and TEM with SAED. To determine the influence of the synthesis route on Ag particle cytotoxicity, micro- and nano-silver powders of different sizes were evaluated for their potential toxicity. Cell cycle and apoptosis analyses were performed by flow cytometry on human colon carcinoma cells and mesenchymal stem cells and through the MTT assay, while cell viability and morphological changes were evaluated using cloning studies. The results showed that the synthesized silver nanoparticles displayed significant cytotoxic effects on cell cultures. The powders presented toxicity in a synthesis-route- and time-dependent manner for pyrosol-synthesized nanoparticles, whereas lower cytotoxicity was measured after cells were treated with silver nanoparticles synthesized through the sol-gel method.

Keywords: Ag, cytotoxicity, pyrosol method, sol-gel method

Procedia PDF Downloads 580
31247 Issues of Time's Urgency and Ritual in Children's Picture Books: A Closer Look at the Contributions of Grandparents

Authors: Karen Armstrong

Abstract:

Although invisible and fleeting, time is an essential variable in perception. Ritual is proposed as an antithesis to the passage of time, a way of linking our narratives with the past, present, and future. This qualitative exploration examines a variety of award-winning twentieth-century children's picture books, specifically regarding issues of time's urgency and ritual with respect to children and grandparents. The paper begins with a consideration of issues of time from psychology with regard to age, specifically contrasting later age and childhood. Next, the value of ritual as represented by the presence of grandparents in children's books is considered. Specific instances of the contributions of grandparents or older adults to this balancing function between time's urgency and ritual are then discussed. Recommendations for future research include a consideration of the depiction of grandparents or older characters in books for older children.

Keywords: children's picture books, grandparents, ritual, time

Procedia PDF Downloads 293
31246 Global Optimization: The Alienor Method Mixed with Piyavskii-Shubert Technique

Authors: Guettal Djaouida, Ziadi Abdelkader

Abstract:

In this paper, we study a coupling of the Alienor method with the Piyavskii-Shubert algorithm. Classical multidimensional global optimization methods are very difficult to implement in high dimensions. The Alienor method transforms a multivariable function into a function of a single variable, for which efficient and rapid methods for computing the global optimum can be used. This simplification is based on the use of a reducing transformation called Alienor.
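For the univariate problem obtained after the Alienor reduction, the Piyavskii-Shubert algorithm maintains a sawtooth lower bound built from a Lipschitz constant L: over a bracket [a, b] the bound attains its minimum at x = (a+b)/2 + (f(a)-f(b))/(2L), with value (f(a)+f(b))/2 - L(b-a)/2, and the bracket with the lowest bound is split at that point. A minimal one-dimensional sketch (the objective and L below are illustrative):

```python
import heapq

def piyavskii_shubert(f, a, b, L, tol=1e-6, max_iter=100000):
    """Global minimum of an L-Lipschitz f on [a, b] via sawtooth bounds."""
    def lower_bound(a, fa, b, fb):
        # Minimum of the piecewise-linear underestimator on [a, b].
        return (fa + fb) / 2.0 - L * (b - a) / 2.0

    fa, fb = f(a), f(b)
    best = min((fa, a), (fb, b))                 # (best value, argmin)
    heap = [(lower_bound(a, fa, b, fb), a, fa, b, fb)]
    for _ in range(max_iter):
        lb, a, fa, b, fb = heapq.heappop(heap)
        if best[0] - lb < tol:                   # bound is tight: done
            break
        x = (a + b) / 2.0 + (fa - fb) / (2.0 * L)
        fx = f(x)
        best = min(best, (fx, x))
        heapq.heappush(heap, (lower_bound(a, fa, x, fx), a, fa, x, fx))
        heapq.heappush(heap, (lower_bound(x, fx, b, fb), x, fx, b, fb))
    return best

# Example: f(x) = (x - 1)^2 on [-2, 4]; |f'| <= 6 there, so L = 6 is valid.
fmin, xmin = piyavskii_shubert(lambda x: (x - 1.0) ** 2, -2.0, 4.0, L=6.0)
print(round(xmin, 3), round(fmin, 6))
```

With the Alienor coupling, f here would be the composition of the multivariable objective with the α-dense reducing curve, evaluated over the curve's parameter interval.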

Keywords: global optimization, reducing transformation, α-dense curves, Alienor method, Piyavskii-Shubert algorithm

Procedia PDF Downloads 490
31245 Bioinformatics High Performance Computation and Big Data

Authors: Javed Mohammed

Abstract:

Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge. They are delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require High-Performance Computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and analysis of gene expression have led to exponential growth in the amount of publicly available genomic data. With this increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that researchers can consider modeling the folding of a protein or even simulating an entire human body. This research paper emphasizes computational biology's growing need for high-performance computing and Big Data. It illustrates HPC's indispensability in meeting the scientific and engineering challenges of the twenty-first century, and how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful problem instances. The article also indicates solutions to optimization problems and the benefits of Big Data for computational biology, and surveys the current state of the art and future generations of HPC computing with Big Data.
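As a concrete illustration of why a polynomial-time workload can still demand HPC, the sketch below performs all-to-all sequence comparison: the number of pairwise comparisons grows quadratically with the number of sequences, so on real genomic datasets the independent pairs would be distributed across cluster nodes. The positional match fraction used here is a toy stand-in for a real alignment score, not any particular bioinformatics tool's metric.

```python
from itertools import combinations

def similarity(a, b):
    """Fraction of matching positions between two equal-length sequences
    (a toy stand-in for a real alignment score)."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def all_to_all(sequences):
    """Score every unordered pair: n*(n-1)/2 comparisons for n sequences,
    so the workload grows quadratically even though each comparison is
    cheap. On an HPC cluster the independent pairs would be farmed out
    to compute nodes."""
    return {(a, b): similarity(a, b) for a, b in combinations(sequences, 2)}

scores = all_to_all(["ACGT", "ACGA", "TCGT", "ACTT"])
```

Four sequences already require six comparisons; a million sequences would require roughly half a trillion.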

Keywords: high performance, big data, parallel computation, molecular data, computational biology

Procedia PDF Downloads 356
31244 Ultra-Fast pH-Gradient Ion Exchange Chromatography for the Separation of Monoclonal Antibody Charge Variants

Authors: Robert van Ling, Alexander Schwahn, Shanhua Lin, Ken Cook, Frank Steiner, Rowan Moore, Mauro de Pra

Abstract:

Purpose: Demonstration of fast, high-resolution charge variant analysis for monoclonal antibody (mAb) therapeutics within 5 minutes. Methods: Commercially available mAbs were used for all experiments. The charge variants of therapeutic mAbs (Bevacizumab, Cetuximab, Infliximab, and Trastuzumab) are analyzed on a strong cation exchange column with a linear pH-gradient separation method. The linear gradient from pH 5.6 to pH 10.2 is generated over time by running a linear pump gradient from 100% Thermo Scientific™ CX-1 pH Gradient Buffer A (pH 5.6) to 100% CX-1 pH Gradient Buffer B (pH 10.2), using the Thermo Scientific™ Vanquish™ UHPLC system. Results: The pH-gradient method is generally applicable to monoclonal antibody charge variant analysis. In conjunction with state-of-the-art column and UHPLC technology, ultra-fast, high-resolution separations are consistently achieved in under 5 minutes for all mAbs analyzed. Conclusion: The linear pH-gradient method is a platform method for mAb charge variant analysis. It can be easily optimized to improve separations and shorten cycle times. Ultra-fast charge variant separation is facilitated with UHPLC, which complements, and in some instances outperforms, CE approaches in terms of both resolution and throughput.
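The linear pump gradient described above can be sketched numerically. This assumes the eluent pH tracks the buffer B fraction linearly (the design goal of the CX-1 buffer pair); in practice the relationship is only approximately linear.

```python
def buffer_b_fraction(t_min, gradient_min):
    """Linear pump program: fraction of buffer B delivered at time t,
    clipped to [0, 1] before the start and after the end of the ramp."""
    return min(max(t_min / gradient_min, 0.0), 1.0)

def eluent_ph(t_min, gradient_min, ph_a=5.6, ph_b=10.2):
    """Eluent pH at time t, assuming pH varies linearly with %B."""
    return ph_a + buffer_b_fraction(t_min, gradient_min) * (ph_b - ph_a)

# pH midway through a 5-minute gradient
print(round(eluent_ph(2.5, 5.0), 2))  # 7.9
```

Each mAb variant elutes when the eluent pH approaches its effective pI, which is why a steeper (shorter) gradient directly shortens the cycle time.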

Keywords: charge variants, ion exchange chromatography, monoclonal antibody, UHPLC

Procedia PDF Downloads 430
31243 Simulating the Dynamics of E-waste Production from Mobile Phone: Model Development and Case Study of Rwanda

Authors: Rutebuka Evariste, Zhang Lixiao

Abstract:

Mobile phone sales and stocks have shown exponential growth globally in past years, with the number of mobile phones produced each year surpassing one billion in 2007. This soaring growth of related e-waste deserves sufficient attention regionally and globally, since 40% of a phone's total weight is metallic, of which 12 elements are identified as highly hazardous and 12 as less harmful. Different studies and methods have been used to estimate the number of obsolete mobile phones, but none has developed a dynamic model or handled the discrepancies resulting from improper approaches and errors in the input data. The aim of this study was to develop a comprehensive dynamic system model for simulating the dynamics of e-waste production from mobile phones, regardless of country or region, and to overcome these previous errors. The logistic model method, combined with the STELLA program, was used to carry out this study. A simulation for Rwanda was then conducted and compared with other countries' results for model testing and validation. Rwanda had about 1.5 million obsolete mobile phones, amounting to 125 tons of waste, in 2014, with e-waste production expected to peak in 2017. By 2020, 4.17 million obsolete phones with 351.97 tons of waste are expected, along with an environmental impact intensity 21 times that of 2005. Through model testing and validation, it is concluded that the present dynamic model is competent and able to deal with mobile phone e-waste production, as it has answered questions raised by previous studies from the Czech Republic, Iran, and China.
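The logistic stock model underlying such simulations can be sketched as follows. The carrying capacity, growth rate, and fixed three-year service life below are illustrative assumptions, not the paper's fitted Rwandan parameters, and the paper itself implements the delays with STELLA stock-and-flow structures rather than this closed form.

```python
import math

def logistic_stock(t, K, P0, r):
    """In-use mobile phone stock following logistic growth from initial
    stock P0 toward carrying capacity K at intrinsic rate r."""
    return K / (1 + (K / P0 - 1) * math.exp(-r * t))

def obsolete_phones(years, K, P0, r, lifespan=3):
    """Phones entering the waste stream each year: units that entered the
    stock `lifespan` years earlier are assumed to become obsolete now
    (a simple fixed-delay obsolescence model)."""
    waste = {}
    for t in years:
        inflow_then = (logistic_stock(t - lifespan, K, P0, r)
                       - logistic_stock(t - lifespan - 1, K, P0, r))
        waste[t] = max(inflow_then, 0.0)
    return waste
```

Because the stock inflow peaks at the logistic inflection point, the e-waste flow peaks roughly one service lifetime later, which is the mechanism behind a projected waste peak following a sales boom.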

Keywords: carrying capacity, dematerialization, logistic model, mobile phone, obsolescence, similarity, Stella, system dynamics

Procedia PDF Downloads 332
31242 Influence of Wind Induced Fatigue Damage in the Reliability of Wind Turbines

Authors: Emilio A. Berny-Brandt, Sonia E. Ruiz

Abstract:

Steel tubular towers serving as support structures for large wind turbines are subject to several hundred million stress cycles arising from the turbulent nature of the wind. This causes high-cycle fatigue, which can govern tower design. The practice of maintaining the support structure after wind turbines reach their typical 20-year design life has become common, but without quantifying the resulting changes in the reliability of the tower. There are several studies on this topic, but most are based on the S-N curve approach using Miner's rule for damage summation, the de facto standard in the wind industry. However, the qualitative nature of Miner's method makes it desirable to use fracture mechanics to measure the effects of fatigue on the capacity curve of the structure, which is important in order to evaluate the integrity and reliability of these towers. Temporally and spatially varying wind speed time histories are simulated based on power spectral density and coherence functions. The simulations are then applied to a SAP2000 finite element model, and step-by-step analysis is used to obtain the stress time histories for a range of representative wind speeds expected during service conditions of the wind turbine. The rainflow method is then used to extract cycle counts and stress ranges from each of these time histories, and a statistical analysis is performed to obtain the distribution parameters of each variable. Monte Carlo simulation is used to evaluate crack growth over time at the tower base using the Paris-Erdogan equation. A nonlinear static pushover analysis is then performed to assess the capacity curve of the structure after a number of years. The capacity curves are used to evaluate the changes in reliability of a steel tower located in Oaxaca, Mexico, where wind energy facilities are expected to grow in the near future. Results show that fatigue at the tower base can have significant effects on the structural capacity of the wind turbine, especially after the 20-year design life, when the crack growth curve starts behaving exponentially.
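The crack-growth step of this procedure can be sketched as a Monte Carlo integration of the Paris-Erdogan law. The material constants and the lognormal stress-range distribution below are illustrative placeholders, not the values fitted from the rainflow statistics of the SAP2000 stress histories.

```python
import math
import random

def cycles_to_grow(a0, af, C, m, delta_sigma, Y=1.0, da=1e-4):
    """Numerically integrate the Paris-Erdogan law da/dN = C * (dK)^m,
    with dK = Y * delta_sigma * sqrt(pi * a), from crack size a0 to af
    (crack sizes in metres, stresses in MPa)."""
    a, N = a0, 0.0
    while a < af:
        dK = Y * delta_sigma * math.sqrt(math.pi * a)  # MPa*sqrt(m)
        N += da / (C * dK ** m)                        # cycles for this increment
        a += da
    return N

def monte_carlo_life(n_samples=200, seed=1):
    """Sample the stress range from a lognormal distribution (illustrative
    parameters) and collect fatigue lives; in the study these statistics
    come from rainflow counting of simulated wind stress histories."""
    rng = random.Random(seed)
    return [cycles_to_grow(a0=1e-3, af=2e-2, C=3e-12, m=3.0,
                           delta_sigma=rng.lognormvariate(math.log(60.0), 0.15))
            for _ in range(n_samples)]
```

Because life scales roughly with the inverse m-th power of the stress range, modest scatter in the loading produces wide scatter in crack size at any given age, which is what the reliability analysis quantifies.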

Keywords: crack growth, fatigue, Monte Carlo simulation, structural reliability, wind turbines

Procedia PDF Downloads 508
31241 Design of Target Selection for Pedestrian Autonomous Emergency Braking System

Authors: Tao Song, Hao Cheng, Guangfeng Tian, Chuang Xu

Abstract:

An autonomous emergency braking (AEB) system is an advanced driver assistance system that enables vehicle and pedestrian collision avoidance to improve vehicle safety. At present, because pedestrian targets are small and highly mobile, pedestrian AEB systems face greater technical difficulties and higher functional requirements. In this paper, a method of pedestrian target selection based on a variable-width funnel is proposed. Based on the current and predicted positions of pedestrians, the relative position of vehicle and pedestrian at the time of collision is calculated, and different braking strategies are adopted according to the hazard level of the pedestrian collision. Under C-NCAP standard operating conditions, the method considering only the pedestrian's current position is compared with the method considering the predicted pedestrian position, and the fixed-width funnel is compared with the variable-width funnel. The results show that, with the variable-width funnel, pedestrian target selection is more accurate and the timing of AEB intervention is more reasonable, because the predicted position of the pedestrian target and the vehicle's lateral motion are taken into account.
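A minimal sketch of TTC-based target selection and graded braking follows. The funnel half-width, lateral margin, and TTC thresholds are illustrative assumptions, and the funnel here is fixed-width for brevity, whereas the paper's contribution is to vary its width; the key point retained is that the pedestrian's *predicted* lateral position at collision time drives the decision.

```python
import math

def time_to_collision(gap_m, closing_speed_ms):
    """TTC = longitudinal gap / closing speed; infinite when not closing."""
    if closing_speed_ms <= 0.0:
        return math.inf
    return gap_m / closing_speed_ms

def inside_funnel(ped_y, ped_vy, ttc, half_width, margin=0.2):
    """Check whether the pedestrian's predicted lateral position at the
    collision time falls inside the collision funnel."""
    if math.isinf(ttc):
        return False
    return abs(ped_y + ped_vy * ttc) <= half_width + margin

def aeb_decision(gap_m, closing_speed_ms, ped_y, ped_vy,
                 half_width=0.9, full_brake_ttc=0.6, warn_ttc=1.6):
    """Graded strategy: monitor, warn, then full braking as TTC shrinks."""
    ttc = time_to_collision(gap_m, closing_speed_ms)
    if not inside_funnel(ped_y, ped_vy, ttc, half_width):
        return "no_action"
    if ttc <= full_brake_ttc:
        return "full_brake"
    if ttc <= warn_ttc:
        return "warn"
    return "monitor"
```

Using the predicted position means a pedestrian currently outside the funnel but walking into the vehicle's path still triggers a response, while one walking clear of it does not.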

Keywords: automatic emergency braking system, pedestrian target selection, TTC, variable width funnel

Procedia PDF Downloads 149
31240 Using Genetic Algorithms to Outline Crop Rotations and a Cropping-System Model

Authors: Nicolae Bold, Daniel Nijloveanu

Abstract:

The cropping system is a method used by farmers. It is an environmentally friendly method that protects natural resources (soil, water, air, nutrients) while increasing production, taking particularities of the crops into account. Combining this powerful method with genetic algorithms makes it possible to generate sequences of crops that form a rotation. This type of algorithm has proven efficient in solving optimization problems, and its polynomial complexity allows it to be applied to more difficult and varied problems. In our case, the optimization consists in finding the most profitable rotation of crops. One of the expected results is optimized usage of resources, minimizing costs and maximizing profit. To achieve these goals, a genetic algorithm was designed. This algorithm finds several optimized cropping-system solutions that yield the highest profit and thus minimize costs. The algorithm uses genetic operators (mutation, crossover) and structures (genes, chromosomes): a candidate cropping system is a chromosome, and a crop within the rotation is a gene within that chromosome. Results on the efficiency of this method are presented in a dedicated section. Implementing this method would benefit farmers by giving them hints and helping them use resources efficiently.
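A minimal sketch of such a genetic algorithm follows. The crop margins and the consecutive-repetition penalty are toy stand-ins for the agronomic constraints and cost model a real cropping-system study would use.

```python
import random

def profit(rotation, margins, repeat_penalty=40):
    """Fitness: sum of per-crop margins minus a penalty for growing the
    same crop in consecutive years (a toy rotation constraint)."""
    total = sum(margins[c] for c in rotation)
    total -= repeat_penalty * sum(a == b for a, b in zip(rotation, rotation[1:]))
    return total

def evolve(crops, margins, years=4, pop_size=30, generations=60, seed=7):
    """Truncation-selection GA: a chromosome is a candidate rotation and
    each gene is the crop grown in one year."""
    rng = random.Random(seed)
    pop = [[rng.choice(crops) for _ in range(years)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: profit(r, margins), reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, years)              # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                     # mutation
                child[rng.randrange(years)] = rng.choice(crops)
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda r: profit(r, margins))

margins = {"wheat": 100, "maize": 120, "soy": 90, "barley": 80}
best = evolve(list(margins), margins)
```

With these toy numbers the penalty steers the search away from monoculture toward alternating rotations, which is exactly the behavior the fitness function is meant to encode.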

Keywords: chromosomes, cropping, genetic algorithm, genes

Procedia PDF Downloads 417
31239 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate is introduced and studied. It extends the log-logistic distribution by adding an extra parameter to the classical distribution, leading to greater flexibility in analysing and modeling various data types. The proposed distribution includes a large number of well-known lifetime distributions as special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties are derived. The method of maximum likelihood is adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study is carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed bathtub-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (Weibull, log-logistic, and Burr XII distributions) and to other three-parameter parametric survival distributions, such as the exponentiated Weibull, the 3-parameter lognormal, the 3-parameter gamma, the 3-parameter Weibull, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provides a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, a Bayesian analysis is carried out and the performance of Gibbs sampling on the data set is assessed.
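For orientation, the hazard of the classical two-parameter log-logistic distribution already switches between monotone-decreasing and unimodal shapes depending on its shape parameter; the generalized distribution's extra parameter (not reproduced here) widens this flexibility further, toward bathtub-type shapes.

```python
def loglogistic_hazard(t, alpha, beta):
    """Hazard of the classical log-logistic distribution with scale alpha
    and shape beta:
        h(t) = (beta/alpha) * (t/alpha)**(beta - 1) / (1 + (t/alpha)**beta)
    For beta <= 1 the hazard is monotone decreasing; for beta > 1 it is
    unimodal, rising to a peak and then decaying."""
    return ((beta / alpha) * (t / alpha) ** (beta - 1)
            / (1 + (t / alpha) ** beta))

ts = [0.5, 1.0, 2.0, 4.0]
decreasing = [loglogistic_hazard(t, alpha=1.0, beta=0.8) for t in ts]
unimodal = [loglogistic_hazard(t, alpha=1.0, beta=3.0) for t in ts]
```

Evaluating the hazard on a grid like this is a quick check that a fitted model actually exhibits the monotone or unimodal shape the data suggest.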

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 188
31238 A Review on Big Data Movement with Different Approaches

Authors: Nay Myo Sandar

Abstract:

With the growth of technologies and applications, a large amount of data has been produced at an increasing rate from various sources such as social media networks, sensor devices, and other information-serving devices. This large collection of massive, complex, and exponentially growing datasets is called big data. Traditional database systems cannot store and process such data because of its size and complexity. Consequently, cloud computing is a potential solution for data storage and processing, since it can provide a pool of server and storage resources. However, moving large amounts of data to and from the cloud is a challenging issue, since large data sizes can incur high latency. With respect to the big data movement problem, this paper reviews previous work, discusses research issues, and identifies approaches for dealing with the problem.

Keywords: big data, cloud computing, big data movement, network techniques

Procedia PDF Downloads 71
31237 Formulation of Corrector Methods from 3-Step Hybrid Adams Type Methods for the Solution of First Order Ordinary Differential Equation

Authors: Y. A. Yahaya, Ahmad Tijjani Asabe

Abstract:

This paper focuses on the formulation of a 3-step hybrid Adams-type method for the solution of first order ordinary differential equations (ODEs). The methods were derived on both grid and off-grid points using multistep collocation schemes and evaluated at selected points to produce a block Adams-type method and an Adams-Moulton method, respectively. The method with the highest order was selected to serve as the corrector. The methods were shown to be convergent and efficient. Numerical experiments were carried out and reveal that the hybrid Adams-type methods performed better than the conventional Adams-Moulton method.
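The predictor-corrector idea can be sketched with the classical two-step Adams-Bashforth predictor paired with a two-step Adams-Moulton corrector; the paper's hybrid 3-step methods add off-grid collocation points not reproduced in this minimal version.

```python
def predictor_corrector(f, t0, y0, h, steps):
    """Solve y' = f(t, y) with a two-step Adams-Bashforth predictor and a
    single two-step Adams-Moulton correction per step (PECE mode)."""
    ts, ys = [t0], [y0]
    # bootstrap the multistep scheme with one Heun (RK2) step
    k1 = f(t0, y0)
    k2 = f(t0 + h, y0 + h * k1)
    ts.append(t0 + h)
    ys.append(y0 + h * (k1 + k2) / 2.0)
    for n in range(1, steps):
        fn = f(ts[n], ys[n])
        fn1 = f(ts[n - 1], ys[n - 1])
        y_pred = ys[n] + h * (3.0 * fn - fn1) / 2.0    # AB2 predictor
        tp = ts[n] + h
        # AM2 corrector, evaluated at the predicted value
        y_corr = ys[n] + h * (5.0 * f(tp, y_pred) + 8.0 * fn - fn1) / 12.0
        ts.append(tp)
        ys.append(y_corr)
    return ts, ys
```

The implicit Adams-Moulton formula is what makes the scheme a "corrector": the predictor supplies the unknown f(t_{n+1}, y_{n+1}) it needs, so no nonlinear solve is required.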

Keywords: adams-moulton type (AMT), corrector method, off-grid, block method, convergence analysis

Procedia PDF Downloads 616
31236 Towards Sustainable Concrete: Maturity Method to Evaluate the Effect of Curing Conditions on the Strength Development in Concrete Structures under Kuwait Environmental Conditions

Authors: F. Al-Fahad, J. Chakkamalayath, A. Al-Aibani

Abstract:

Conventional methods of determining concrete strength under controlled laboratory conditions do not accurately represent the actual strength of concrete developed under site curing conditions. This difference in measured strength is greater in Kuwait's extreme environment, which is characterized by a hot marine climate, with normal summer temperatures exceeding 50°C, accompanied by dry wind in desert areas and salt-laden wind in marine and onshore areas. Therefore, test methods are required to measure the in-place properties of concrete, both for quality assurance and for the development of durable concrete structures. The maturity method, which defines the strength of a given concrete mix as a function of its age and temperature history, is an approach to quality control for the production of sustainable and durable concrete structures. The uniquely harsh environmental conditions in Kuwait make it impractical to adopt experiences and empirical equations developed from maturity methods in other countries. Concrete curing, especially at early ages, plays an important role in developing and improving the strength of the structure. This paper investigates the use of the maturity method to assess the effectiveness of three different curing methods on the compressive and flexural strength development of a 60 MPa high-strength concrete mix produced with silica fume. The maturity approach was used to accurately predict concrete compressive and flexural strength at later ages under different curing conditions. Maturity curves were developed for compressive and flexural strength for a concrete mix commonly used in Kuwait, cured under three different conditions: water curing, external spray coating, and the use of an internal curing compound added during concrete mixing. It was observed that the maturity curve developed for the same mix depends on the curing conditions and can be used to predict concrete strength under different exposure and curing conditions. The study showed that the external spray curing method cannot be recommended, as it failed to bring the concrete to acceptable strength values, especially in flexure. Using an internal curing compound led to acceptable strength levels compared with water curing. Utilization of the developed maturity curves will help contractors and engineers determine in-place concrete strength at any time and under different curing conditions, and will help in deciding the appropriate time to remove formwork. The resulting reduction in construction time and cost has positive impacts on sustainable construction.
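A minimal sketch of the maturity computation is given below, using the Nurse-Saul temperature-time factor together with an illustrative hyperbolic strength-maturity relation; the fitted constants are assumptions, not the calibration obtained for the 60 MPa mix in this study.

```python
def nurse_saul_maturity(temps_c, dt_hours=1.0, datum_c=0.0):
    """Nurse-Saul maturity index (temperature-time factor, degC*h):
    M = sum over the curing history of (T - T_datum) * dt,
    with sub-datum temperatures contributing nothing."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

def strength_from_maturity(m, s_ult=60.0, k=0.0005, m0=100.0):
    """Hyperbolic strength-maturity relation: strength rises from an
    offset maturity m0 toward the ultimate strength s_ult (here 60 MPa,
    matching the mix grade; k and m0 are illustrative fitted constants)."""
    if m <= m0:
        return 0.0
    return s_ult * k * (m - m0) / (1.0 + k * (m - m0))

# one day of curing at a steady 20 degC
m_day = nurse_saul_maturity([20.0] * 24)
```

In practice, one such strength-maturity curve would be fitted per curing regime, which is why the paper reports that the curve for the same mix depends on the curing conditions.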

Keywords: curing, durability, maturity, strength

Procedia PDF Downloads 296