Search results for: hybrid extragradient method
17365 Increasing Efficiency of Own Used Fuel Gas by “LOTION” Method in Generating Systems PT. Pertamina EP Cepu Donggi Matindok Field in Central Sulawesi Province, Indonesia
Authors: Ridwan Kiay Demak, Firmansyahrullah, Muchammad Sibro Mulis, Eko Tri Wasisto, Nixon Poltak Frederic, Agung Putu Andika, Lapo Ajis Kamamu, Muhammad Sobirin, Kornelius Eppang
Abstract:
PC Prove LSM successfully improved the efficiency of Own Used Fuel Gas with the "Lotion" method in the PT Pertamina EP Cepu Donggi Matindok Generating System. The innovation of the "LOTION" (LOAD PRIORITY SELECTION) method in the generating system is a model that qualifies main and non-main equipment by priority so that gas processing keeps running even with only one GTG in operation. The GTG operating system has been integrated, controlled, and monitored properly through PC programs and web-based access to answer Industry 4.0 challenges. These improvements have enabled Donggi Matindok Field production to reach 98.77 MMSCFD and made the field a PROPER EMAS candidate in 2022-2023. Additional revenue from increasing the efficiency of own used gas amounted to USD 5.06 million per year, and the reduction of operational costs from maintenance efficiency (ABO), due to saved GTG running hours, amounted to USD 3.26 million per year. Continuity of fuel gas availability for the GTG generation system maintains the operational reliability of the plant at 3.833333 MMSCFD. Gas emissions released to the environment were also reduced by 33,810 tons of CO₂ eq per year.
Keywords: LOTION method, load priority selection, fuel gas efficiency, gas turbine generator, reduce emissions
Procedia PDF Downloads 65
17364 Cytogenetic Investigation of Patients with Disorder of Sexual Development Using G-Banding Karyotype and Fluorescence In situ Hybridization
Authors: Riksa Parikrama, Bremmy Laksono, Dadang S. H. Effendi
Abstract:
Disorder of sexual development (DSD) covers various conditions with specific terms such as Klinefelter syndrome, Turner syndrome, androgen insensitivity syndrome, and many more. The techniques for accurately diagnosing those conditions have developed extensively. However, conventional karyotyping and fluorescence in situ hybridization (FISH) are still widely used in many genetic laboratories as the basic methods to determine the chromosomal condition of DSD patients. A cytogenetic study was conducted on 36 DSD patients in the Cell Culture and Cytogenetics Laboratory, Faculty of Medicine, Universitas Padjadjaran, Indonesia. Most of the patients referred to the laboratory were diagnosed with primary amenorrhea, hypospadias, micropenis, genital ambiguity, or congenital adrenal hyperplasia. The study used the G-banding technique to acquire a complete karyotype, followed by FISH as either a confirmation or a comparison method. Among the 36 patients, G-banding karyotype and FISH results showed that two were diagnosed with 45,X (Turner syndrome); three with 47,XXY (Klinefelter syndrome); five with 46,XX DSD; 22 with 46,XY DSD; and four with 46,XY complete androgen insensitivity syndrome. G-banding karyotype analysis paired with FISH using X and Y chromosome probes produced similar results. The present analysis showed that FISH is a reliable method to attain a rapid and accurate chromosome analysis result for DSD patients. Nevertheless, the conventional karyotyping technique is still vital if other conditions appear in DSD patients, in order to obtain a more detailed karyotype result that the FISH method cannot achieve.
Keywords: chromosome, DSD, FISH, karyotype
Procedia PDF Downloads 227
17363 Labview-Based System for Fiber Links Events Detection
Authors: Bo Liu, Qingshan Kong, Weiqing Huang
Abstract:
With the rapid development of modern communication, real-time diagnosis of fiber-optic quality and faults has attracted wide attention. In this paper, a Labview-based system is proposed for fiber-optic fault detection. The wavelet threshold denoising method combined with Empirical Mode Decomposition (EMD) is applied to denoise the optical time domain reflectometer (OTDR) signal. A method based on the Gabor representation is then used to detect events. Experimental measurements show that the signal-to-noise ratio (SNR) of the OTDR signal is improved by 1.34 dB on average compared with using the wavelet threshold denoising method alone. The proposed system scores highly in event detection capability and accuracy. The maximum detectable fiber length of the proposed Labview-based system is 65 km.
Keywords: empirical mode decomposition, events detection, Gabor transform, optical time domain reflectometer, wavelet threshold denoising
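As a rough illustration of the denoising stage, the sketch below applies soft wavelet thresholding to a synthetic OTDR-like trace with the PyWavelets library; the signal, the db4 wavelet, and the universal threshold rule are illustrative assumptions, and the EMD stage and Gabor-based event detector of the abstract are omitted.

```python
# Minimal sketch of wavelet threshold denoising for an OTDR-like trace.
# Assumptions: synthetic signal, 'db4' wavelet, universal soft threshold.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 2048
x = np.linspace(0, 1, n)
clean = np.exp(-3 * x) + 0.2 * (x > 0.6)        # decaying backscatter + one "event" step
noisy = clean + 0.02 * rng.standard_normal(n)

coeffs = pywt.wavedec(noisy, "db4", level=6)     # multilevel DWT
sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate from the finest detail band
thr = sigma * np.sqrt(2 * np.log(n))             # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(den, "db4")[:n]

def snr_db(ref, sig):
    return 10 * np.log10(np.sum(ref**2) / np.sum((ref - sig)**2))

print(f"SNR before: {snr_db(clean, noisy):.2f} dB, after: {snr_db(clean, denoised):.2f} dB")
```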
Procedia PDF Downloads 126
17362 Rounded-off Measurements and Their Implication on Control Charts
Authors: Ran Etgar
Abstract:
The process of rounding off measurements in continuous variables is commonly encountered. Although it usually has minor effects, it can sometimes lead to poor outcomes in statistical process control using the X̄-chart. The traditional control limits can cause incorrect conclusions if applied carelessly. This study looks into the limitations of the classical control limits, particularly the impact of asymmetry. An approach to determining the distribution function of the measured parameter (Ȳ) is presented, resulting in a more precise method to establish the upper and lower control limits. The proposed method, while slightly more complex than Shewhart's original idea, is still user-friendly and accurate and only requires the use of two straightforward tables.
Keywords: inaccurate measurement, SPC, statistical process control, rounded-off, control chart
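For context, the classical Shewhart X̄-chart limits that the abstract revisits can be computed as in the sketch below; this is only the textbook baseline with simulated, rounded subgroup data and the standard A2 constant, not the corrected limits proposed by the author.

```python
# Classical X-bar chart limits from rounded subgroup data (textbook baseline).
# Assumptions: simulated measurements, subgroups of size 5, A2 constant for n=5.
import numpy as np

rng = np.random.default_rng(1)
true_mean, true_sd, n_sub, sub_size = 10.0, 0.1, 50, 5
data = rng.normal(true_mean, true_sd, size=(n_sub, sub_size))
data = np.round(data, 1)                              # the rounding-off step discussed above

xbar = data.mean(axis=1)                              # subgroup means
rbar = (data.max(axis=1) - data.min(axis=1)).mean()   # average subgroup range
A2 = 0.577                                            # Shewhart constant for subgroup size 5

center = xbar.mean()
ucl = center + A2 * rbar
lcl = center - A2 * rbar
print(f"CL={center:.3f}  UCL={ucl:.3f}  LCL={lcl:.3f}")
print("points outside limits:", int(np.sum((xbar > ucl) | (xbar < lcl))))
```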
Procedia PDF Downloads 46
17361 A Generative Adversarial Framework for Bounding Confounded Causal Effects
Authors: Yaowei Hu, Yongkai Wu, Lu Zhang, Xintao Wu
Abstract:
Causal inference from observational data is finding wide application in many fields. However, unidentifiable situations, where causal effects cannot be uniquely computed from observational data, pose critical barriers to applying causal inference to complicated real applications. In this paper, we develop a bounding method for estimating the average causal effect (ACE) under unidentifiable situations due to hidden confounders. We propose to parameterize the unknown exogenous random variables and structural equations of a causal model using neural networks and implicit generative models. Then, with an adversarial learning framework, we search the parameter space to explicitly traverse causal models that agree with the given observational distribution and find those that minimize or maximize the ACE to obtain its lower and upper bounds. The proposed method makes no assumption about the data-generating process or the types of the variables. Experiments using both synthetic and real-world datasets show the effectiveness of the method.
Keywords: average causal effect, hidden confounding, bound estimation, generative adversarial learning
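To make the bounding idea concrete, the toy sketch below replaces the adversarial/neural search of the paper with plain random search over a small binary model with one hidden confounder: it enumerates models that exactly reproduce an assumed observed distribution and records the extreme ACE values. The model family, observed probabilities, and sample count are all illustrative assumptions.

```python
# Toy ACE bounding: sample binary SCMs with a hidden confounder U that exactly
# reproduce an assumed observed distribution, and record the extreme ACE values.
# Model: U~Bern(pu), X|U~Bern(px[u]), Y|X,U~Bern(py[x][u]).  Illustrative only.
import numpy as np

# Assumed observed quantities: P(X=1), P(Y=1|X=0), P(Y=1|X=1).
p_x1, p_y1_x0, p_y1_x1 = 0.4, 0.25, 0.65

rng = np.random.default_rng(2)
lo, hi = np.inf, -np.inf
for _ in range(100_000):
    pu, px0, py00, py10 = rng.uniform(size=4)            # free parameters (U=0 side)
    # Solve the remaining parameters so the observed distribution is matched exactly.
    px1 = (p_x1 - (1 - pu) * px0) / pu                    # from P(X=1)
    if not 0 <= px1 <= 1:
        continue
    w1 = pu * px1 / p_x1                                  # P(U=1 | X=1)
    w0 = pu * (1 - px1) / (1 - p_x1)                      # P(U=1 | X=0)
    py11 = (p_y1_x1 - (1 - w1) * py10) / w1               # from P(Y=1|X=1)
    py01 = (p_y1_x0 - (1 - w0) * py00) / w0               # from P(Y=1|X=0)
    if not (0 <= py11 <= 1 and 0 <= py01 <= 1):
        continue
    # ACE = sum_u P(u) * [P(Y=1|do(X=1),u) - P(Y=1|do(X=0),u)]
    a = (1 - pu) * (py10 - py00) + pu * (py11 - py01)
    lo, hi = min(lo, a), max(hi, a)
print(f"estimated ACE bounds under this model family: [{lo:.3f}, {hi:.3f}]")
```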
Procedia PDF Downloads 196
17360 An Intelligent Thermal-Aware Task Scheduler in Multiprocessor System on a Chip
Authors: Sina Saadati
Abstract:
Multiprocessor Systems-on-Chip (MPSoCs) are widely used in modern computers to execute sophisticated software and applications. These systems include different processors for distinct aims. Most of the proposed task schedulers attempt to improve energy consumption. In some schedulers, the processor's temperature is considered in order to increase the system's reliability and performance. In this research, we have proposed a new method for thermal-aware task scheduling based on an artificial neural network (ANN). This method enables us to consider a variety of factors in the scheduling process. Factors such as ambient temperature, season (which is important for some embedded systems), processor speed, and the computation type of the tasks have a complex relationship with the final temperature of the system. This issue can be solved using a machine learning algorithm. Another point is that our solution makes the system intelligent so that it can be adaptive. We have also shown that the computational complexity of the proposed method is low. As a consequence, it is also suitable for battery-powered systems.
Keywords: task scheduling, MPSoC, artificial neural network, machine learning, computer architecture, artificial intelligence
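A minimal sketch of the idea is given below: a small neural network (scikit-learn's MLPRegressor as a stand-in for the paper's ANN) is trained to predict core temperature from scheduling-relevant features, and the processor with the lowest predicted temperature is chosen for an incoming task. The features, the synthetic training data, and the assignment rule are assumptions made only for illustration.

```python
# Sketch: train a small ANN to predict core temperature from scheduling features,
# then pick the processor with the lowest predicted temperature for a new task.
# All data, features and the thermal model here are synthetic stand-ins.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
n = 2000
ambient = rng.uniform(10, 40, n)          # ambient temperature [deg C]
freq = rng.uniform(0.8, 3.0, n)           # processor clock [GHz]
load = rng.uniform(0, 1, n)               # utilization of the candidate processor
task_type = rng.integers(0, 3, n)         # 0=integer, 1=float, 2=memory-bound (toy)
X = np.column_stack([ambient, freq, load, task_type])
y = ambient + 8 * freq * load + 3 * (task_type == 1) + rng.normal(0, 1, n)  # toy thermal model

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
model.fit(X, y)

# Choose among candidate processors for one incoming task (ambient 30 C, float task).
candidates = np.array([[30, 2.8, 0.9, 1],    # fast but heavily loaded core
                       [30, 1.6, 0.2, 1]])   # slower, lightly loaded core
pred = model.predict(candidates)
print("predicted temperatures:", np.round(pred, 1), "-> pick core", int(pred.argmin()))
```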
Procedia PDF Downloads 106
17359 Energy Consumption Statistic of Gas-Solid Fluidized Beds through Computational Fluid Dynamics-Discrete Element Method Simulations
Authors: Lei Bi, Yunpeng Jiao, Chunjiang Liu, Jianhua Chen, Wei Ge
Abstract:
Two energy paths are proposed from thermodynamic viewpoints. Energy consumption means the total power input to the specific system, and it can be decomposed into energy retention and energy dissipation. Energy retention is the variation of accumulated mechanical energy in the system, and energy dissipation is the energy converted to heat by irreversible processes. Based on the Computational Fluid Dynamics-Discrete Element Method (CFD-DEM) framework, the different energy terms are quantified from the specific flow elements of fluid cells and particles as well as their interactions with the wall. Direct energy consumption statistics are carried out for both cold and hot flow in gas-solid fluidization systems. To clarify the statistical method, it is necessary to identify which system is studied: the particle-fluid system or the particle sub-system. For the cold flow, the total energy consumption of the particle sub-system can predict the onset of bubbling and turbulent fluidization, while the trends of local energy consumption can reflect the dynamic evolution of mesoscale structures. For the hot flow, different heat transfer mechanisms are analyzed, and the original solver is modified to reproduce the experimental results. The influence of the heat transfer mechanisms and the heat source on energy consumption is also investigated. The proposed statistical method has proven to be energy-conservative and easy to conduct, and it is expected to be applicable to other multiphase flow systems.
Keywords: energy consumption statistic, gas-solid fluidization, CFD-DEM, regime transition, heat transfer mechanism
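A minimal bookkeeping sketch of this decomposition is shown below: over a time window, consumption is the integral of the power input, retention is the change of accumulated mechanical energy, and dissipation is the difference. All numbers are arbitrary placeholders rather than CFD-DEM results.

```python
# Bookkeeping sketch of the decomposition: consumption = retention + dissipation.
# Power input and mechanical-energy histories are arbitrary placeholder curves.
import numpy as np

t = np.linspace(0.0, 10.0, 101)                    # time [s]
power_in = 50.0 + 5.0 * np.sin(0.5 * t)            # total power input [W]
mech_energy = 120.0 + 20.0 * (1 - np.exp(-t / 4))  # accumulated mechanical energy [J]

# Trapezoidal integral of the power input over the window.
consumption = np.sum(0.5 * (power_in[1:] + power_in[:-1]) * np.diff(t))
retention = mech_energy[-1] - mech_energy[0]       # change of accumulated mechanical energy
dissipation = consumption - retention              # converted to heat by irreversible processes

print(f"consumption = {consumption:.1f} J, retention = {retention:.1f} J, "
      f"dissipation = {dissipation:.1f} J")
```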
Procedia PDF Downloads 74
17358 Climate Adaptive Building Shells for Plus-Energy-Buildings, Designed on Bionic Principles
Authors: Andreas Hammer
Abstract:
Six distinctive architectural designs from Frankfurt University will be discussed in this paper, and the future potential of their adaptable facades with implemented solar thin-film sheets will be shown, acting and reacting to the climatic and solar changes of their specific sites. The different aspects, as well as limitations with regard to technical and functional restrictions, will be named. The design process for a "multi-purpose building", a "high-rise building refurbishment" and a "biker's lodge" in the river Rheine valley has been critically outlined and developed step by step, from an international studentship towards an overall energy strategy that firstly had to push the design to a plus-energy building and secondly had to incorporate bionic aspects into the design of the building skins. Both main parameters needed to be reviewed and refined during the whole design process. Various basic bionic approaches were given (e.g. solar ivy™, Flectofin™ or HygroSkin™) to experiment with, regarding the use of bendable photovoltaic thin-film elements as parts of a hybrid, kinetic façade system.
Keywords: bionic and bioclimatic design, climate adaptive building shells [CABS], energy-strategy, harvesting façade, high-efficiency building skin, photovoltaic in building skins, plus-energy-buildings, solar gain, sustainable building concept
Procedia PDF Downloads 433
17357 Oil Recovery Study by Low Temperature Carbon Dioxide Injection in High-Pressure High-Temperature Micromodels
Authors: Zakaria Hamdi, Mariyamni Awang
Abstract:
For the past decades, CO₂ flooding has been used as a successful method for enhanced oil recovery (EOR). However, the high mobility ratio and the fingering effect are considered important drawbacks of this process. Low temperature injection of CO₂ into high temperature reservoirs may improve the oil recovery, but simulating multiphase flow in a non-isothermal medium is difficult, and commercial simulators are very unstable in these conditions. Furthermore, to the best of the authors' knowledge, no experimental work has been done to verify the results of the simulations and to understand the pore-scale process. In this paper, we present results of investigations on the injection of low temperature CO₂ into a high-pressure high-temperature micromodel, with injection temperatures ranging from 34 to 75 °F. The effects of temperature and the saturation changes of the different fluids are measured in each case. The results support the proposed method. The injection of CO₂ at low temperatures increased the oil recovery in high temperature reservoirs significantly. Also, the CO₂-rich phases available in the high temperature system can affect the oil recovery through a better sweep of the oil, which is initially caused by the penetration of LCO₂ inside the system. Furthermore, no unfavorable effect was detected using this method. Low temperature CO₂ is proposed to be used as early as secondary recovery.
Keywords: enhanced oil recovery, CO₂ flooding, micromodel studies, miscible flooding
Procedia PDF Downloads 356
17356 Geoecological Problems of Karst Waters in Chiatura Municipality, Georgia
Authors: Liana Khandolishvili, Giorgi Dvalashvili
Abstract:
Karst waters play an important role in water supply around the world. Among them, the vauclusian springs in Chiatura municipality (Georgia) are used as drinking water and are irreplaceable for the local population. Accordingly, it is important to assess their geo-ecological condition and take care to maintain sustainability. The aim of the paper is to identify the hazards of groundwater pollution in the karst environment and to develop a scheme for their protection which takes into consideration both the hydrogeological characteristics and the role of humans. To achieve this goal, the EPIK method was selected, with which the epikarst zone of the study area was studied in detail, as well as the protective cover, the infiltration conditions, and the degree of karst network development; the condition of the karst waters in Chiatura municipality was then assessed, their main pollutants were identified, and recommendations were prepared for their protection. The results of the study showed that the karst water pollution rate in Chiatura municipality is highest where karst-fissured layers are present and intensive extraction works are underway. The EPIK method is innovative in Georgia and was first introduced using the example of the karst waters of Chiatura municipality.
Keywords: cave, EPIK method, pollution, karst waters, geology, geography, ecology
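For orientation, an EPIK-style assessment combines ratings of the four attributes (E, P, I, K) into a protection factor; a minimal sketch with the commonly cited weighting F = 3E + 1P + 3I + 2K follows. The attribute ratings for the two example cells are invented placeholders, not values from the Chiatura survey.

```python
# Sketch of an EPIK-style protection factor for a set of map cells.
# F = 3*E + 1*P + 3*I + 2*K is the commonly cited EPIK weighting; the attribute
# ratings below are illustrative placeholders, not the Chiatura survey values.
WEIGHTS = {"E": 3, "P": 1, "I": 3, "K": 2}

def protection_factor(e, p, i, k):
    """Lower F means higher intrinsic vulnerability of the karst groundwater."""
    return WEIGHTS["E"] * e + WEIGHTS["P"] * p + WEIGHTS["I"] * i + WEIGHTS["K"] * k

cells = {
    "spring catchment A": dict(e=1, p=1, i=1, k=1),   # strongly karstified, thin cover
    "plateau B":          dict(e=3, p=3, i=3, k=2),   # better protected
}
for name, attrs in sorted(cells.items(), key=lambda kv: protection_factor(**kv[1])):
    print(f"{name}: F = {protection_factor(**attrs)} (lower F = more vulnerable)")
```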
Procedia PDF Downloads 98
17355 VISMA: A Method for System Analysis in Early Lifecycle Phases
Authors: Walter Sebron, Hans Tschürtz, Peter Krebs
Abstract:
The choice of applicable analysis methods in safety or systems engineering depends on the depth of knowledge about a system and on the respective lifecycle phase. However, the analysis method chain still shows gaps, as it should support system analysis during the lifecycle of a system from a rough concept in the pre-project phase until end-of-life. This paper's goal is to discuss an analysis method, the VISSE Shell Model Analysis (VISMA) method, which aims at closing the gap in the early system lifecycle phases, like the conceptual or pre-project phase, or the project start phase. It was originally developed to aid in the definition of the system boundary of electronic system parts, e.g. a control unit for a pump motor. Furthermore, it can also be applied to non-electronic system parts. The VISMA method is a graphical, sketch-like method that stratifies a system and its parts into inner and outer shells, like the layers of an onion. It analyses a system in a two-step approach, from the innermost to the outermost components, followed by the reverse direction. To ensure a complete view of a system and its environment, the VISMA should be performed by (multifunctional) development teams. To introduce the method, a set of rules and guidelines has been defined in order to enable a proper shell build-up. In the first step, the innermost system, named the system under consideration (SUC), is selected, which is the focus of the subsequent analysis. Then, its directly adjacent components, responsible for providing input to and receiving output from the SUC, are identified. These components are the content of the first shell around the SUC. Next, the input and output components to the components in the first shell are identified and form the second shell around the first one. Continuing this way, shell by shell is added with its respective parts until the border of the complete system (external border) is reached. Last, two external shells are added to complete the system view: the environment and the use case shell. This system view is also stored for future use. In the second step, the shells are examined in the reverse direction (outside to inside) in order to remove superfluous components or subsystems. Input chains to the SUC, as well as output chains from the SUC, are described graphically via arrows to highlight functional chains through the system. As a result, this method offers a clear and graphical description and overview of a system, its main parts, and its environment; however, the focus still remains on a specific SUC. It helps to identify the interfaces and interfacing components of the SUC, as well as important external interfaces of the overall system. It supports the identification of the first internal and external hazard causes and causal chains. Additionally, the method promotes a holistic picture and cross-functional understanding of a system, its contributing parts, internal relationships, and possible dangers within a multidisciplinary development team.
Keywords: analysis methods, functional safety, hazard identification, system and safety engineering, system boundary definition, system safety
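The inside-out shell build-up of step one can be pictured as a breadth-first traversal of a component connectivity graph, as in the structural sketch below; the component graph is a made-up pump-control example, and the sketch is not the VISMA tooling itself.

```python
# Structural sketch of VISMA step 1: build shells outward from the system under
# consideration (SUC) over a component connectivity graph (made-up example data).
from collections import deque

# Undirected adjacency: which components exchange inputs/outputs with which.
graph = {
    "pump_control_unit": ["pump_motor", "power_supply", "bus_interface"],
    "pump_motor": ["pump_control_unit", "pump"],
    "power_supply": ["pump_control_unit"],
    "bus_interface": ["pump_control_unit", "vehicle_bus"],
    "pump": ["pump_motor"],
    "vehicle_bus": ["bus_interface"],
}

def build_shells(graph, suc):
    """Return a list of shells: shell 0 is the SUC, shell k its k-th neighbours."""
    shells, visited, frontier = [[suc]], {suc}, deque([suc])
    while frontier:
        next_frontier = []
        for node in frontier:
            for nb in graph[node]:
                if nb not in visited:
                    visited.add(nb)
                    next_frontier.append(nb)
        if next_frontier:
            shells.append(sorted(next_frontier))
        frontier = deque(next_frontier)
    return shells

for depth, shell in enumerate(build_shells(graph, "pump_control_unit")):
    print(f"shell {depth}: {shell}")
```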
Procedia PDF Downloads 228
17354 Iron Recovery from Red Mud As Zero-Valent Iron Metal Powder Using Direct Electrochemical Reduction Method
Authors: Franky Michael Hamonangan Siagian, Affan Maulana, Himawan Tri Bayu Murti Petrus, Panut Mulyono, Widi Astuti
Abstract:
In this study, the feasibility of the direct electrowinning method for producing zero-valent iron from red mud was investigated. The bauxite residue sample came from the Tayan mine, Indonesia, and contains a high amount of hematite (Fe₂O₃). Before electrolysis, the samples were characterized by various analytical techniques (ICP-AES, SEM, XRD) to determine their chemical composition and mineralogy. The direct electrowinning of red mud suspended in NaOH was carried out at low temperatures ranging from 30 to 110 °C. Variations of current density, red mud:NaOH ratio, and temperature were carried out to determine the optimum operation of the direct electrowinning process. Cathode deposits and residues in the electrochemical cells were analyzed using XRD, XRF, and SEM to determine the chemical composition and the current recovery. The low-temperature electrolysis current efficiency with red mud can reach 20% recovery at a current density of 920,945 A/m². The moderate performance of the process with red mud was attributed to the troublesome adsorption of red mud particles on the cathode, making the reduction far less efficient than that with hematite.
Keywords: alumina, red mud, electrochemical reduction, iron production
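The current (Faradaic) efficiency reported above is conventionally obtained from Faraday's law; the sketch below shows the calculation for the Fe(III) to Fe(0) reduction. The physical constants are standard, while the current, electrolysis time, and deposit mass are placeholder values chosen only to land near the 20% figure.

```python
# Current-efficiency estimate for iron electrowinning via Faraday's law.
# Physical constants are standard; current, time and deposit mass are placeholders.
F = 96485.0        # Faraday constant [C/mol]
M_FE = 55.845      # molar mass of iron [g/mol]
Z = 3              # electrons per Fe(III) -> Fe(0) reduction

current_a = 2.0            # cell current [A] (assumed)
time_s = 3600.0            # electrolysis time [s] (assumed)
deposit_g = 0.28           # mass of Fe recovered at the cathode [g] (assumed)

theoretical_g = current_a * time_s * M_FE / (Z * F)
efficiency = 100.0 * deposit_g / theoretical_g
print(f"theoretical Fe: {theoretical_g:.3f} g, current efficiency: {efficiency:.1f} %")
```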
Procedia PDF Downloads 83
17353 Sequential Covering Algorithm for Nondifferentiable Global Optimization Problem and Applications
Authors: Mohamed Rahal, Djaouida Guetta
Abstract:
In this paper, the one-dimensional unconstrained global optimization problem for continuous functions satisfying a Hölder condition is considered. We extend the sequential covering algorithm (SCA) for Lipschitz functions to a large class of Hölder functions. The convergence of the method is studied, and the algorithm can be applied to systems of nonlinear equations. Finally, some numerical examples are presented to illustrate the efficiency of the present approach.
Keywords: global optimization, Hölder functions, sequential covering method, systems of nonlinear equations
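A simplified covering sketch in the same spirit is shown below: on an interval of half-width r centred at c, the Hölder condition |f(x) − f(y)| ≤ H|x − y|^α gives the lower bound f(c) − H·r^α, and intervals are subdivided until the bound gap is small. The test function, H, and α are illustrative assumptions, and this branch-and-bound variant is not the authors' exact SCA.

```python
# Simplified sequential-covering sketch for minimizing a Hölder-continuous f on [a, b]:
# on an interval of half-width r centred at c, f is bounded below by f(c) - H*r**alpha.
# Subdivide the most promising interval until the bound gap is below a tolerance.
import heapq
import math

def holder_minimize(f, a, b, H, alpha, tol=1e-4, max_iter=20000):
    c = 0.5 * (a + b)
    best_x, best_f = c, f(c)
    heap = [(best_f - H * (0.5 * (b - a)) ** alpha, a, b)]   # (lower bound, interval)
    for _ in range(max_iter):
        if not heap:
            break
        lb, lo, hi = heapq.heappop(heap)
        if best_f - lb < tol:                # global bound gap small enough
            break
        mid = 0.5 * (lo + hi)
        for l, h in ((lo, mid), (mid, hi)):  # split and re-cover both halves
            cc, rr = 0.5 * (l + h), 0.5 * (h - l)
            fc = f(cc)
            if fc < best_f:
                best_x, best_f = cc, fc
            bound = fc - H * rr ** alpha
            if bound < best_f:               # keep only intervals that may contain the min
                heapq.heappush(heap, (bound, l, h))
    return best_x, best_f

# Nondifferentiable test function satisfying a Hölder condition with alpha = 0.5
# (H = 2.5 is a safe Hölder constant on [0, 2]); both are assumed for illustration.
f = lambda x: math.sqrt(abs(x - 0.7)) + 0.1 * math.cos(8 * x)
x_star, f_star = holder_minimize(f, 0.0, 2.0, H=2.5, alpha=0.5)
print(f"approximate minimizer x = {x_star:.4f}, f(x) = {f_star:.4f}")
```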
Procedia PDF Downloads 375
17352 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to return to its pre-disaster level of functionality. In practice, highways will be temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. The failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network's functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical values are as follows. First, the proposed network recovery curve contributes to a comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curve. Moreover, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
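The downtime effect on the resilience measure can be illustrated with a toy recovery curve: resilience is taken here as the normalized area under the functionality curve, computed once while ignoring repair closures and once with functionality knocked down during each repair window. The schedule, functionality levels, and horizon are invented numbers, not the Wenchuan case-study data.

```python
# Toy illustration: resilience as the normalized area under the functionality curve,
# with and without accounting for repair downtime.  All numbers are invented.
import numpy as np

t = np.arange(0, 101)                      # days after the earthquake
func = np.piecewise(t.astype(float), [t < 30, (t >= 30) & (t < 70), t >= 70],
                    [0.55, 0.80, 1.00])    # stepwise recovery of network functionality

# During each bridge repair the highway is temporarily blocked (downtime),
# which knocks functionality down while the work is underway.
func_with_downtime = func.copy()
func_with_downtime[(t >= 20) & (t < 30)] -= 0.10   # repair job 1
func_with_downtime[(t >= 60) & (t < 70)] -= 0.15   # repair job 2

def resilience(f, t):
    """Normalized area under the functionality curve over the horizon."""
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)) / (t[-1] - t[0])

r_ignore = resilience(func, t)
r_downtime = resilience(func_with_downtime, t)
print(f"resilience ignoring downtime: {r_ignore:.3f}")
print(f"resilience with downtime:     {r_downtime:.3f}")
print(f"overestimation: {100 * (r_ignore - r_downtime) / r_downtime:.1f} %")
```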
Procedia PDF Downloads 153
17351 Comparison between Hardy-Cross Method and Water Software to Solve a Pipe Networking Design Problem for a Small Town
Authors: Ahmed Emad Ahmed, Zeyad Ahmed Hussein, Mohamed Salama Afifi, Ahmed Mohammed Eid
Abstract:
Water is of great importance in life. In order to deliver water from its sources to the users, many procedures must be carried out by water engineers. One of the main procedures for delivering water to the community is designing pressurized pipe networks. The main aim of this work is to calculate the water demand of a small town and then design a simple water network to distribute the water throughout the town with the smallest losses. The literature is reviewed to cover the main points related to water distribution. Moreover, the methodology introduces two approaches to solve the research problem: one using the iterative Hardy-Cross method and the other using the water software Pipe Flow. The results present two main designs that satisfy the same research requirements. Finally, the researchers conclude that the use of water software provides more capabilities and options for water engineers.
Keywords: looping pipe networks, Hardy-Cross networks accuracy, relative error of the Hardy-Cross method
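For a concrete picture of the iterative method, the sketch below applies the classic Hardy-Cross flow correction ΔQ = −Σh / (n Σ|h/Q|) to the simplest possible loop, two parallel pipes sharing one demand, with the head loss model h = rQ|Q|^(n−1) and n = 2. The resistances, total demand, and initial guess are assumed for illustration and are unrelated to the town network of the paper.

```python
# Hardy-Cross flow correction for the simplest loop: two parallel pipes sharing a
# total demand.  Head loss model h = r*Q*|Q|**(n-1) with n = 2; resistances,
# demand and the initial guess are illustrative assumptions.
N_EXP = 2.0
R1, R2 = 4.0, 9.0          # pipe resistance coefficients (assumed units)
Q_TOTAL = 0.30             # total flow entering the junction [m^3/s] (assumed)

def head_loss(r, q):
    return r * q * abs(q) ** (N_EXP - 1)

q1 = 0.15                  # initial guess for pipe 1; pipe 2 carries the rest
for it in range(20):
    q2 = Q_TOTAL - q1
    # Around the loop, pipe 1 is traversed with the flow and pipe 2 against it.
    residual = head_loss(R1, q1) - head_loss(R2, q2)
    denom = N_EXP * (abs(head_loss(R1, q1) / q1) + abs(head_loss(R2, q2) / q2))
    dq = -residual / denom                 # classic Hardy-Cross correction
    q1 += dq
    if abs(dq) < 1e-9:
        break
q2 = Q_TOTAL - q1
print(f"converged in {it + 1} iterations: Q1 = {q1:.4f}, Q2 = {q2:.4f} m^3/s")
print(f"head losses: h1 = {head_loss(R1, q1):.4f}, h2 = {head_loss(R2, q2):.4f}")
```

At convergence the two head losses balance, which is exactly the loop condition the Hardy-Cross iteration enforces in larger networks as well.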
Procedia PDF Downloads 171
17350 Analysis and Re-Design Ergonomic Mineral Water Gallon Trolley
Authors: Dessy Laksyana Utami
Abstract:
Manual material handling activities often cause difficulties for workers, such as muscle injury due to incorrect posture, so workers need tools to facilitate their activities. One tool to assist their activities in the transportation of ordinary materials is a trolley. This tool is very useful because it can carry many items without requiring much extra energy to operate. Trolleys are widely and comfortably used in the community, but the old design still draws complaints from workers because of its lack of grip and capacity. After posture analysis with the REBA method, the obtained risk value indicated that the tool needed to be improved. The redesign uses Indonesian anthropometric data at the 50th percentile.
Keywords: material handling, REBA method, postural assessment, trolley
Procedia PDF Downloads 142
17349 The Soliton Solution of the Quadratic-Cubic Nonlinear Schrodinger Equation
Authors: Sarun Phibanchon, Yuttakarn Rattanachai
Abstract:
The quadratic-cubic nonlinear Schrodinger equation can describe weakly nonlinear ion-acoustic waves in a magnetized plasma with a slightly non-Maxwellian electron distribution by using Madelung's fluid picture. Here, the soliton solution to the quadratic-cubic nonlinear Schrodinger equation is determined by direct integration. From the characteristics of a soliton, the solution can be confirmed to be a soliton by considering its time evolution and the collisions between two such solutions. These results are shown by applying the spectral method.
Keywords: soliton, ion-acoustic waves, plasma, spectral method
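The time evolution referred to above is typically computed with a split-step Fourier (spectral) scheme; a minimal sketch for an equation of the form i u_t + ½ u_xx + (q1|u| + q2|u|²)u = 0 is given below. The coefficients, grid, and sech-type initial pulse are illustrative assumptions rather than the plasma parameters of the paper.

```python
# Minimal split-step Fourier (spectral) integrator for a quadratic-cubic NLS:
#   i u_t + 0.5 u_xx + (q1*|u| + q2*|u|**2) u = 0
# Coefficients, grid and the initial pulse are illustrative assumptions.
import numpy as np

q1, q2 = 0.5, 1.0
L, n, dt, steps = 40.0, 512, 0.002, 5000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)           # spectral wavenumbers

u = (1.2 / np.cosh(1.2 * x)).astype(complex)         # sech-type initial pulse
lin_half = np.exp(-0.5j * k**2 * (dt / 2))           # half linear step in Fourier space

for _ in range(steps):
    u = np.fft.ifft(lin_half * np.fft.fft(u))        # linear half step
    u = u * np.exp(1j * dt * (q1 * np.abs(u) + q2 * np.abs(u) ** 2))  # nonlinear step
    u = np.fft.ifft(lin_half * np.fft.fft(u))        # linear half step

mass = np.sum(np.abs(u) ** 2) * (L / n)              # conserved L2 norm (sanity check)
print(f"|u|_max = {np.abs(u).max():.4f},  integral |u|^2 = {mass:.4f}")
```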
Procedia PDF Downloads 414
17348 A Development of Personalized Edutainment Contents through Storytelling
Authors: Min Kyeong Cha, Ju Yeon Mun, Seong Baeg Kim
Abstract:
Recently, the ‘play of learning’ has become important and is emphasized as a useful learning tool. Therefore, interest in edutainment contents is growing. Storytelling is considered first as a method that improves the transmission of information and the learner's interest when planning edutainment contents. In this study, we designed edutainment contents in the form of an adventure game that applies the storytelling method. The contents provide questions and items that are constituted dynamically, and the learning contents are reorganized through analysis of test results. This allows learners to solve various questions through effective iterative learning. As a result, the learners can reach mastery learning.
Keywords: storytelling, edutainment, mastery learning, computer operating principle
Procedia PDF Downloads 322
17347 Cuckoo Search (CS) Optimization Algorithm for Solving Constrained Optimization
Authors: Sait Ali Uymaz, Gülay Tezel
Abstract:
This paper presents comparison results on the performance of the Cuckoo Search (CS) algorithm for constrained optimization problems. For constraint handling, the CS algorithm uses the penalty method. The CS algorithm is tested on thirteen well-known test problems, and the results obtained are compared to those of the Particle Swarm Optimization (PSO) algorithm. Mean, best, median, and worst values were employed for the analysis of performance.
Keywords: cuckoo search, particle swarm optimization, constrained optimization problems, penalty method
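A compact sketch of the combination described above (Cuckoo Search plus a static penalty for constraint handling) is given below on a simple assumed test problem, minimize x1² + x2² subject to x1 + x2 ≥ 1, whose optimum is (0.5, 0.5). The Lévy-flight step uses Mantegna's algorithm; the population size, abandonment fraction, penalty weight, and iteration count are common defaults, not necessarily the paper's settings.

```python
# Simplified Cuckoo Search with a static penalty for constraint handling.
# Assumed test problem: minimize x1^2 + x2^2  subject to  x1 + x2 >= 1
# (optimum at (0.5, 0.5)).  Parameters are common CS defaults, not the paper's.
from math import gamma, pi, sin
import numpy as np

rng = np.random.default_rng(4)
DIM, N_NESTS, PA, MU, ITERS = 2, 25, 0.25, 1e4, 2000
LOW, HIGH = -5.0, 5.0

def penalized(x):
    violation = max(0.0, 1.0 - x[0] - x[1])          # g(x) = 1 - x1 - x2 <= 0
    return x[0] ** 2 + x[1] ** 2 + MU * violation ** 2

def levy_step(size, beta=1.5):
    """Mantegna's algorithm for Levy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

nests = rng.uniform(LOW, HIGH, (N_NESTS, DIM))
fit = np.array([penalized(x) for x in nests])

for _ in range(ITERS):
    best = nests[fit.argmin()]
    # Levy-flight moves scaled by the distance to the current best; greedy replacement.
    new = np.clip(nests + 0.01 * levy_step((N_NESTS, DIM)) * (nests - best), LOW, HIGH)
    new_fit = np.array([penalized(x) for x in new])
    improved = new_fit < fit
    nests[improved], fit[improved] = new[improved], new_fit[improved]
    # Abandon a fraction PA of nests (never the best one) and rebuild them at random.
    abandon = rng.uniform(size=N_NESTS) < PA
    abandon[fit.argmin()] = False
    nests[abandon] = rng.uniform(LOW, HIGH, (int(abandon.sum()), DIM))
    fit[abandon] = [penalized(x) for x in nests[abandon]]

best = nests[fit.argmin()]
print(f"best solution ~ {np.round(best, 3)}, objective = {best[0]**2 + best[1]**2:.4f}")
```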
Procedia PDF Downloads 562
17346 Epileptic Seizure Prediction by Exploiting Signal Transitions Phenomena
Authors: Mohammad Zavid Parvez, Manoranjan Paul
Abstract:
A seizure prediction method is proposed that extracts global features, using the phase correlation between adjacent epochs to detect relative changes, and local features, using the fluctuation/deviation within an epoch to determine fine changes, of different EEG signals. A classifier and a regularization technique are applied for the reduction of false alarms and the improvement of the overall prediction accuracy. The experiments show that the proposed method outperforms state-of-the-art methods and provides high prediction accuracy (i.e., 97.70%) with a low false alarm rate, using EEG signals from different brain locations in a benchmark data set.
Keywords: epilepsy, seizure, phase correlation, fluctuation, deviation
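The global feature mentioned above rests on phase correlation, i.e. the peak of the inverse FFT of the normalized cross-power spectrum of two epochs; the toy sketch below computes it for synthetic signals in which two "adjacent" epochs share a common broadband component while a third epoch does not. The signals are deliberately exaggerated toy data, not EEG recordings, and the classifier and regularization stages of the method are omitted.

```python
# Phase correlation between adjacent epochs (the "global feature" idea):
# the peak of the inverse FFT of the normalized cross-power spectrum measures how
# similar two epochs are; a drop signals a transition.  Synthetic toy data only.
import numpy as np

rng = np.random.default_rng(5)
n = 1024

def phase_correlation(a, b):
    """Peak of the inverse FFT of the normalized cross-power spectrum."""
    cross = np.fft.rfft(a) * np.conj(np.fft.rfft(b))
    cross /= np.abs(cross) + 1e-12            # keep only the phase information
    return np.fft.irfft(cross, n=n).max()

shared = rng.standard_normal(n)                      # broadband activity common to both
epoch_a = shared + 0.2 * rng.standard_normal(n)      # adjacent epoch, same regime
epoch_b = shared + 0.2 * rng.standard_normal(n)      # adjacent epoch, same regime
epoch_c = rng.standard_normal(n)                     # epoch after a regime change (toy)

print(f"similar adjacent epochs : {phase_correlation(epoch_a, epoch_b):.3f}")
print(f"across a regime change  : {phase_correlation(epoch_b, epoch_c):.3f}")
```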
Procedia PDF Downloads 467
17345 Comparison of Power Generation Status of Photovoltaic Systems under Different Weather Conditions
Authors: Zhaojun Wang, Zongdi Sun, Qinqin Cui, Xingwan Ren
Abstract:
Based on multivariate statistical analysis theory, this paper uses the principal component analysis method, the Mahalanobis distance analysis method, and a fitting method to establish a photovoltaic health model to evaluate the health of photovoltaic panels. First of all, according to weather conditions, the photovoltaic panel variable data are classified into five categories: sunny, cloudy, rainy, foggy, and overcast. The health of photovoltaic panels in these five types of weather is studied. Secondly, scatterplots of the relationship between the amount of electricity produced under each kind of weather and the other variables were plotted. It was found that the amount of electricity generated by photovoltaic panels has a significant nonlinear relationship with time. The fitting method was used to fit the relationship between the amount of electricity generated and time, and a nonlinear equation was obtained. Then, the principal component analysis method was used to analyze the independent variables under the five kinds of weather conditions. According to the Kaiser-Meyer-Olkin test, it was found that three types of weather, namely overcast, foggy, and sunny, meet the conditions for factor analysis, while cloudy and rainy weather do not. Therefore, through the principal component analysis method, the main components of overcast weather are temperature, AQI, and PM2.5; the main component of foggy weather is temperature; and the main components of sunny weather are temperature, AQI, and PM2.5. Cloudy and rainy weather require analysis of all of their variables, namely temperature, AQI, PM2.5, solar radiation intensity, and time. Finally, taking the variable values in sunny weather as observed values and the main components of cloudy, foggy, overcast, and rainy weather as sample data, the Mahalanobis distances between the observed values and these sample values were obtained. A comparative analysis was carried out to compare the degree of deviation of the Mahalanobis distance to determine the health of the photovoltaic panels under different weather conditions. It was found that the weather conditions in which the Mahalanobis distance fluctuations ranged from small to large were: foggy, cloudy, overcast, and rainy.
Keywords: fitting, principal component analysis, Mahalanobis distance, SPSS, MATLAB
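The distance computation at the heart of the comparison can be sketched as below: the Mahalanobis distance of an observation from a reference ("sunny") sample uses the sample mean and inverse covariance. The variables follow the abstract (temperature, AQI, PM2.5), but all values are synthetic placeholders.

```python
# Mahalanobis distance of daily PV observations from a "sunny weather" reference set.
# Variables follow the abstract (temperature, AQI, PM2.5); values are synthetic.
import numpy as np

rng = np.random.default_rng(6)
sunny = np.column_stack([rng.normal(28, 3, 200),     # temperature [deg C]
                         rng.normal(60, 15, 200),    # AQI
                         rng.normal(35, 10, 200)])   # PM2.5 [ug/m^3]
mean = sunny.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(sunny, rowvar=False))

def mahalanobis(x):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

observations = {
    "foggy day":    np.array([22.0, 80.0, 55.0]),
    "overcast day": np.array([20.0, 95.0, 70.0]),
    "rainy day":    np.array([17.0, 40.0, 20.0]),
}
for label, x in observations.items():
    print(f"{label}: Mahalanobis distance = {mahalanobis(x):.2f}")
```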
Procedia PDF Downloads 150
17344 Improving the Design of Blood Pressure and Blood Saturation Monitors
Authors: L. Parisi
Abstract:
A blood pressure monitor, or sphygmomanometer, can be either manual or automatic, employing the auscultatory method or the oscillometric method, respectively. The manual version of the sphygmomanometer involves an inflatable cuff and a stethoscope used to detect the sounds generated by the arterial walls in order to measure blood pressure in an artery. An automatic sphygmomanometer can be effectively used to monitor blood pressure through a pressure sensor, which detects vibrations provoked by oscillations of the arterial walls. The pressure sensor implemented in this device improves the accuracy of the measurements taken.
Keywords: blood pressure, blood saturation, sensors, actuators, design improvement
Procedia PDF Downloads 460
17343 Erodibility Analysis of Cikapundung Hulu: A Study Case of Mekarwangi Catchment Area
Authors: Shantosa Yudha Siswanto, Rachmat Harryanto
Abstract:
The aim of the research was to investigate the effect of land use and slope steepness on the soil erodibility index. The research was conducted from September to December 2013 in the Mekarwangi catchment area, a sub-watershed of Cikapundung Hulu, Indonesia. The study was carried out using a descriptive method. A physiographic free survey was used as the survey method, i.e., a survey based on land physiographic appearance. Soil sampling was carried out along transects based on the similarity of slope, without calculating the distance between observation points. Soil samples were taken from three classes of land use: forest, plantation, and dry cultivation area. Each land use comprises three slope classes: 8-15%, 16-25%, and 26-40%. Five soil samples were taken from each of them, resulting in 45 observation points. The results of the research showed that land use type and slope class have different effects on soil erodibility. The highest C-organic content and permeability were found in forest on 16-25% slopes. Forest land use on 8-15% slopes gave the lowest effect on soil erodibility.
Keywords: land use, slope, erodibility, erosion
Procedia PDF Downloads 257
17342 A Genetic Algorithm Based Ensemble Method with Pairwise Consensus Score on Malware Cacophonous Labels
Authors: Shih-Yu Wang, Shun-Wen Hsiao
Abstract:
In the field of cybersecurity, many vendors provide classification results for malware samples in the form of labels that contain important information, also called AV labels. Many researchers rely on AV labels for their research. Unfortunately, AV labels are too cluttered: they do not have a fixed format or fixed naming rules, because the naming results are based on each classifier's viewpoint. One way to fix the problem is to take a majority vote. However, voting can sometimes create problems of bias. Thus, we create a novel ensemble approach which does not rely on the cacophonous naming results but depends on group identification to aggregate everyone's opinion. To achieve this purpose, we develop a scoring system called the Pairwise Consensus Score (PCS) to calculate result similarity. The entire method architecture combines a genetic algorithm and PCS to find the maximum consensus in the group. Experimental results revealed that our method outperformed majority voting by 10% in terms of the score.
Keywords: genetic algorithm, ensemble learning, malware family, malware labeling, AV labels
Procedia PDF Downloads 91
17341 Cloning of Strawberry's Malonyltransferase Genes and Characterisation of Their Enzymes
Authors: Xiran Wang, Johanna Trinkl, Thomas Hoffmann, Wilfried Schwab
Abstract:
Malonyltransferases (MATs) are enzymes that play a key role in the biosynthesis of secondary metabolites in plants, such as flavonoids and anthocyanins. As a flavonoid-rich fruit, the strawberry is an ideal model to study MATs. From the Goodberry metabolome data on the hybrid generation of two strawberry varieties, Fragaria × ananassa cv. 'Senga Sengana' and 'Candonga', we found that the malonylated flavonoid concentration is significantly higher in 'Senga Sengana' than in 'Candonga'. Therefore, we aimed to identify and characterize the malonyltransferases responsible for the different malonylated flavonoid concentrations in the two strawberry cultivars. In this study, we found six MATs in strawberry via genome mapping, metabolome analysis, gene cloning, and enzyme assays, which catalyzed the malonylation of the flavonoid substrates quercetin-3-glucoside, kaempferol-3-glucoside, pelargonidin-3-glucoside, and cyanidin-3-glucoside. All four compounds reacted with the FaMATs to varying degrees. These MATs have important implications for the flavonoid biosynthesis of strawberries and also provide insights into flavonoid biosynthesis, potential applications in agriculture, plant science, and pharmacy, and information on the regulation of secondary metabolism in plants.
Keywords: malonyltransferase, strawberry, flavonoid biosynthesis, enzyme assay
Procedia PDF Downloads 143
17340 Development and Validation of a Green Analytical Method for the Analysis of Daptomycin Injectable by Fourier-Transform Infrared Spectroscopy (FTIR)
Authors: Eliane G. Tótoli, Hérida Regina N. Salgado
Abstract:
Daptomycin is an important antimicrobial agent used in clinical practice nowadays, since it is very active against some Gram-positive bacteria that are particularly challenging for medicine, such as methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Enterococci (VRE). The importance of environmental preservation has received special attention in recent years. Considering the evident need to protect the natural environment and the introduction of strict quality requirements regarding analytical procedures used in pharmaceutical analysis, industries must seek environmentally friendly alternatives for the analytical methods and other processes that they follow in their routine. In view of these factors, green analytical chemistry is prevalent and encouraged nowadays. In this context, infrared spectroscopy stands out. It is a method that does not use organic solvents and, although it is formally accepted for the identification of individual compounds, it also allows the quantitation of substances. Considering that there are few green analytical methods described in the literature for the analysis of daptomycin, the aim of this work was the development and validation of a green analytical method for the quantification of this drug in lyophilized powder for injectable solution by Fourier-transform infrared spectroscopy (FT-IR). Method: Translucent potassium bromide pellets containing predetermined amounts of the drug were prepared and subjected to spectrophotometric analysis in the mid-infrared region. After obtaining the infrared spectrum, and with the assistance of the IR Solution software, quantitative analysis was carried out in the spectral region between 1575 and 1700 cm⁻¹, related to a carbonyl band of the daptomycin molecule, and the height of this band was analyzed in terms of absorbance. The method was validated according to ICH guidelines regarding linearity, precision (repeatability and intermediate precision), accuracy, and robustness. Results and discussion: The method proved to be linear (r = 0.9999), precise (RSD% < 2.0), accurate, and robust over a concentration range from 0.2 to 0.6 mg/pellet. In addition, this technique does not use organic solvents, which is one great advantage over the most common analytical methods. This fact contributes to minimizing the generation of organic solvent waste by the industry and thereby reduces the impact of its activities on the environment. Conclusion: The validated method proved to be adequate to quantify daptomycin in lyophilized powder for injectable solution and can be used for its routine analysis in quality control. In addition, the proposed method is environmentally friendly, which is in line with the global trend.
Keywords: daptomycin, Fourier-transform infrared spectroscopy, green analytical chemistry, quality control, spectrometry in IR region
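The linearity and repeatability figures quoted above come from a standard calibration-curve treatment; the sketch below reproduces that arithmetic (least-squares fit, correlation coefficient, relative standard deviation of replicates) on synthetic absorbance values over the validated 0.2-0.6 mg/pellet range. The numbers are placeholders, not the paper's measurements.

```python
# Linearity check for an FTIR calibration curve: band absorbance vs. drug amount.
# Absorbance values are synthetic placeholders over the validated range; the
# statistics (slope, intercept, r, RSD) mirror an ICH-style evaluation.
import numpy as np

amount = np.array([0.2, 0.3, 0.4, 0.5, 0.6])                 # mg of daptomycin per pellet
absorbance = np.array([0.101, 0.149, 0.202, 0.251, 0.298])   # band height (assumed)

slope, intercept = np.polyfit(amount, absorbance, 1)
r = np.corrcoef(amount, absorbance)[0, 1]
print(f"y = {slope:.4f}x + {intercept:.4f},  r = {r:.4f}")

# Repeatability: relative standard deviation of replicate readings at one level.
replicates = np.array([0.200, 0.203, 0.198, 0.201, 0.199, 0.202])
rsd = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"RSD = {rsd:.2f} %  (acceptance: < 2.0 %)")
```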
Procedia PDF Downloads 382
17339 Nonlinear Dynamic Analysis of Base-Isolated Structures Using a Mixed Integration Method: Stability Aspects and Computational Efficiency
Authors: Nicolò Vaiana, Filip C. Filippou, Giorgio Serino
Abstract:
In order to reduce the numerical computations in the nonlinear dynamic analysis of seismically base-isolated structures, a Mixed Explicit-Implicit time integration Method (MEIM) has been proposed. Adopting the explicit, conditionally stable central difference method to compute the nonlinear response of the base isolation system, and the implicit, unconditionally stable Newmark constant average acceleration method to determine the superstructure's linear response, the proposed MEIM, which is conditionally stable due to the use of the central difference method, makes it possible to avoid the iterative procedure generally required by conventional monolithic solution approaches within each time step of the analysis. The main aim of this paper is to investigate the stability and computational efficiency of the MEIM when employed to perform the nonlinear time history analysis of base-isolated structures with sliding bearings. Indeed, in this case, the critical time step could become smaller than the one used to define the earthquake excitation accurately, due to the very high initial stiffness values of such devices. The numerical results obtained from nonlinear dynamic analyses of a base-isolated structure with a friction pendulum bearing system, performed by using the proposed MEIM, are compared to those obtained adopting a conventional monolithic solution approach, i.e., the implicit, unconditionally stable Newmark constant average acceleration method employed in conjunction with the iterative pseudo-force procedure. According to the numerical results, in the presented numerical application the MEIM does not have stability problems, since the critical time step is larger than the ground acceleration time step despite the high initial stiffness of the friction pendulum bearings. In addition, compared to the conventional monolithic solution approach, the proposed algorithm preserves its computational efficiency even when it is adopted to perform the nonlinear dynamic analysis using a smaller time step.
Keywords: base isolation, computational efficiency, mixed explicit-implicit method, partitioned solution approach, stability
Procedia PDF Downloads 283
17338 An Alternative Method for Computing Clothoids
Authors: Gerardo Casal, Miguel E. Vázquez-Méndez
Abstract:
The clothoid (also known as the Cornu spiral or Euler spiral) is a curve characterized by a curvature proportional to its length. This property makes it widely used as a transition curve in designing the layout of roads and railway tracks. In this work, starting from the geometrical property characterizing the clothoid, its parametric equations are obtained and two algorithms to compute it are compared. The first (classical) is widely used in surveying schools and is based on explicit formulas obtained from Taylor expansions of the sine and cosine functions. The second one (alternative) is a very simple algorithm based on the numerical solution of the initial value problems giving the clothoid parameterization. Both methods are compared on some typical surveying problems. The alternative method does not use complex formulas, so it is conceptually very simple and easy to apply. It gives good results even where the classical method goes wrong (when the quotient between length and radius of curvature is high), needs no subsequent translations or rotations, and, consequently, seems an efficient tool for designing the layout of roads and railway tracks.
Keywords: transition curves, railroad and highway engineering, Runge-Kutta methods
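The two routes can be sketched side by side: the classical route truncates the series x(s) = s − s⁵/(40C²) + s⁹/(3456C⁴), y(s) = s³/(6C) − s⁷/(336C³) + s¹¹/(42240C⁵) with C = R·L, while the alternative route integrates x′(s) = cos(s²/(2C)), y′(s) = sin(s²/(2C)) numerically (classic RK4 below). The radius and transition length are illustrative values, not taken from the paper.

```python
# Two ways to compute a clothoid transition of length L_TOTAL reaching radius R_END:
# (1) truncated series of the Fresnel integrals, (2) numerical integration (RK4) of
# x' = cos(theta), y' = sin(theta) with theta = s**2 / (2*R*L).
# R_END and L_TOTAL are illustrative values.
import math

R_END, L_TOTAL = 300.0, 90.0          # final radius [m], transition length [m]
C = R_END * L_TOTAL                   # clothoid constant A^2 = R*L

def series_point(s, c=C):
    """Truncated Taylor-series coordinates (three terms, as in classical tables)."""
    x = s - s**5 / (40 * c**2) + s**9 / (3456 * c**4)
    y = s**3 / (6 * c) - s**7 / (336 * c**3) + s**11 / (42240 * c**5)
    return x, y

def rk4_point(s_end, steps=1000, c=C):
    """Numerical solution of the clothoid initial value problem with classic RK4.
    The right-hand side depends only on s, so k2 and k3 coincide."""
    h, s, x, y = s_end / steps, 0.0, 0.0, 0.0
    f = lambda s: (math.cos(s**2 / (2 * c)), math.sin(s**2 / (2 * c)))
    for _ in range(steps):
        k1 = f(s); k2 = f(s + h / 2); k3 = f(s + h / 2); k4 = f(s + h)
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        s += h
    return x, y

for s in (30.0, 60.0, 90.0):
    xs, ys = series_point(s)
    xr, yr = rk4_point(s)
    print(f"s={s:5.1f} m  series=({xs:.4f}, {ys:.4f})  RK4=({xr:.4f}, {yr:.4f})")
```

For this gentle transition the two routes agree closely; the numerical route remains reliable when the deflection s²/(2C) grows and the truncated series starts to lose accuracy.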
Procedia PDF Downloads 287
17337 Fuel Cells Not Only for Cars: Technological Development in Railways
Authors: Marita Pigłowska, Beata Kurc, Paweł Daszkiewicz
Abstract:
Railway vehicles are divided into two groups: traction (powered) vehicles and wagons. The traction vehicles include locomotives (line and shunting), railcars (sometimes referred to as railbuses), and multiple units (electric and diesel) consisting of several or a dozen carriages. In vehicles with diesel traction, fuel energy (petrol, diesel, or compressed gas) is converted into mechanical energy directly in the internal combustion engine or via electricity. In the latter case, the combustion engine generator produces electricity that is then used to drive the vehicle (diesel-electric drive or electric transmission). In Poland, such a solution dominates in both heavy line and shunting locomotives. The classic diesel drive is available for the lightest shunting locomotives, railcars, and passenger diesel multiple units. Vehicles with electric traction do not have their own source of energy; they use pantographs to obtain electricity from the traction network. To determine the competitiveness of the hydrogen propulsion system, it is essential to understand how it works. The basic elements of the drive system of a railway vehicle that uses hydrogen as a source of traction force are fuel cells, batteries, fuel tanks, traction motors, and the main and auxiliary converters. The compressed hydrogen is stored in tanks usually located on the roof of the vehicle. This resource is replenished with the use of specialized infrastructure while the vehicle is stationary. Hydrogen is supplied to the fuel cell, where it oxidizes. The products of this chemical reaction are electricity and water (in two forms: liquid and water vapor). Electricity is stored in batteries (so far, lithium-ion batteries are used). The electricity stored in this way is used to drive the traction motors and supply the onboard equipment. The current generated by the fuel cell passes through the main converter, whose task is to adjust it to the values required by the consumers, i.e., the batteries and the traction motor. This work will attempt to construct a fuel cell with unique electrodes. This research follows a trend that connects industry with science. The first goal will be to obtain hydrogen on a large scale in tube furnaces, to thoroughly analyze the obtained structures (IR), and to apply the method in fuel cells. The second goal is to create a low-energy storage and distribution station for hydrogen and electric vehicles. The scope of the research includes obtaining a carbon variety and oxide systems on a large scale using a tubular furnace and then supplying vehicles. Acknowledgments: This work is supported by the Polish Ministry of Science and Education, project "The best of the best! 4.0", number 0911/MNSW/4968 (M.P.), and grant 0911/SBAD/2102 (B.K.).
Keywords: railway, hydrogen, fuel cells, hybrid vehicles
Procedia PDF Downloads 194
17336 Step Height Calibration Using Hamming Window: Band-Pass Filter
Authors: Dahi Ghareab Abdelsalam Ibrahim
Abstract:
Calibration of step heights with high accuracy is needed for many applications in industry. In general, a step height consists of three bands: the pass band, the transition band (roll-off), and the stop band. Abdelsalam used a convolution of the transfer functions of both Chebyshev type 2 and elliptic filters with the WFF of the Fresnel transform in the frequency domain to produce a steeper roll-off with the removal of ripples in the pass and stop bands. In this paper, we use a new method based on a Hamming-window band-pass filter for the calibration of step heights in terms of precise adjustment of the pass band, roll-off, and stop band. The method is applied to calibrate a nominal step height of 40 cm. The step height is first measured by asynchronous dual-wavelength phase-shift interferometry. The measured step height is then calibrated by simulation of the Hamming-window band-pass filter. The spectrum of the simulated band-pass filter is computed at N = 881 and f0 = 0.24. We can conclude that the proposed method can calibrate any step height by adjusting only two factors, N and f0.
Keywords: optical metrology, step heights, hamming window, band-pass filter
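A Hamming-window FIR band-pass filter with the two quoted factors (N = 881 taps, normalized centre frequency f0 = 0.24) can be sketched with scipy.signal.firwin, as below; the bandwidth, the synthetic profile it is applied to, and the zero-phase filtering choice are assumptions for illustration, not the paper's calibration procedure.

```python
# Sketch of a Hamming-window FIR band-pass filter with N taps centred at f0,
# applied to a synthetic profile.  N and f0 follow the abstract (881, 0.24 in
# Nyquist-normalized frequency); the bandwidth and test signal are assumed.
import numpy as np
from scipy.signal import firwin, filtfilt

N, F0, BW = 881, 0.24, 0.08                      # taps, centre and width (Nyquist = 1)
taps = firwin(N, [F0 - BW / 2, F0 + BW / 2], pass_zero=False, window="hamming")

n = 4096
x = np.arange(n)
profile = np.where((x > 1500) & (x < 2500), 1.0, 0.0)      # ideal step-like feature
# Modulate the feature onto the pass band: F0/2 cycles/sample = F0 in Nyquist units.
carrier = profile * np.cos(2 * np.pi * (F0 / 2) * x)
noisy = carrier + 0.1 * np.random.default_rng(7).standard_normal(n)

filtered = filtfilt(taps, [1.0], noisy)          # zero-phase filtering with the FIR
print(f"fraction of signal energy kept: {np.sum(filtered**2) / np.sum(noisy**2):.2f}")
```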
Procedia PDF Downloads 88