Search results for: structured tariff calculation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3964

3784 R Statistical Software Applied in Reliability Analysis: Case Study of Diesel Generator Fans

Authors: Jelena Vucicevic

Abstract:

Reliability analysis is a very important task in many fields. In any industry, it is crucial for maintenance, efficiency, safety and monetary costs. There are established ways to calculate reliability, unreliability, failure density and failure rate. This paper introduces another way of calculating reliability, using the R statistical software. R is a free software environment for statistical computing and graphics. It compiles and runs on a wide variety of UNIX platforms, Windows and macOS. The R programming environment is a widely used open-source system for statistical analysis and statistical programming. It includes thousands of functions for the implementation of both standard and new statistical methods, and it does not limit the user to these built-in functions. The program has several benefits over similar programs: it is free and, as open source, constantly updated; it has a built-in help system; and the R language is easy to extend with user-written functions. The significance of the work is the calculation of time to failure, or reliability, in a new way, using statistics. Another advantage of this approach is that no technical details are needed: it can be applied to any part for which we need to know the time to failure, in order to schedule appropriate maintenance while maximizing usage and minimizing costs. In this case, the calculations have been made on diesel generator fans, but the same principle can be applied to any other part. The data for this paper came from a field engineering study of the time to failure of diesel generator fans. The ultimate goal was to decide whether or not to replace the working fans with a higher-quality fan to prevent future failures. Seventy generators were studied. For each one, the number of hours of running time from its first being put into service until fan failure, or until the end of the study (whichever came first), was recorded. The dataset consists of two variables: hours and status. Hours gives the working time of each fan, and status records the event: 1 = failed, 0 = censored. Censored data represent cases that could no longer be tracked, so the fan could still fail or survive. Obtaining the result with R was easy and quick, and the program takes the censored data into account in the results, which is not easy in a hand calculation. For the purposes of the paper, the results from the R program were compared to hand calculations in two different cases: censored data treated as failures, and censored data treated as survivals. The three sets of results differ significantly. R handles censored data properly and therefore gives more precise results than the hand calculations.
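
The censored-data workflow the abstract describes can be illustrated outside R as well. Below is a minimal Python sketch using the lifelines library on a small invented excerpt of a fan dataset (hours, status with 1 = failed, 0 = censored); the original analysis used R's survival tooling and the full 70-fan dataset, so treat the data and library choice here as assumptions.

```python
# Minimal sketch of a censored time-to-failure analysis (hypothetical data
# excerpt; the paper analyses all 70 fans in R). Requires: pip install lifelines
from lifelines import KaplanMeierFitter

hours  = [4500, 4600, 11500, 11500, 15600, 16000, 18500, 20300, 20300, 20800]
status = [1,    1,    1,     1,     1,     0,     0,     1,     0,     0]  # 1 = failed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations=hours, event_observed=status, label="diesel generator fans")

# The estimator keeps censored fans in the risk set until their last
# observed hour, which is exactly what a naive hand calculation misses.
print(kmf.survival_function_)     # S(t), the estimated reliability over time
print(kmf.median_survival_time_)  # time at which estimated reliability drops to 0.5
```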

Keywords: censored data, R statistical software, reliability analysis, time to failure

Procedia PDF Downloads 383
3783 Delivery System Design of the Local Part to Reduce the Logistic Costs in an Automotive Industry

Authors: Alesandro Romero, Inaki Maulida Hakim

Abstract:

This research was conducted in an automotive company in Indonesia to overcome the problem of high logistics cost, which stems from a large number of additional truck deliveries. From the breakdown of the problem, one route was chosen, the one with the highest gap value: RE-04. The research methodology starts from calculating the ideal condition, building a simulation, calculating the ideal logistics cost, and proposing an improvement. From the calculation of the ideal condition, the box arrangement on the truck was determined; the average efficiency was 97.4% with three truck deliveries per day. The route simulation was built with Tecnomatix Plant Simulation software to visualize for the company how the system operates on route RE-04 under the ideal condition. Furthermore, the calculation of the logistics cost of the ideal condition yields savings of Rp53.011.800,00 per month. The last step is proposing improvements in the area of route RE-04. The route arrangement is done with the Savings Method, and the visiting sequence of each supplier with the Nearest Neighbor heuristic. The results of the proposed improvements are three new route groups, which are expected to decrease the logistics cost by Rp3.966.559,40 per day and increase the average truck efficiency by 8.78% per day.
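
As an illustration of the route-building step, the sketch below computes Clarke-Wright savings values (the classic form of the "Savings Method") and orders suppliers by nearest neighbour. The coordinates are invented placeholders, not the RE-04 data, and the identification with Clarke-Wright is an assumption.

```python
import itertools
import math

# Hypothetical supplier coordinates (km); the real RE-04 data are not in the abstract.
depot = (0.0, 0.0)
suppliers = {"S1": (4.0, 1.0), "S2": (5.0, 2.0), "S3": (1.0, 6.0)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Clarke-Wright saving for serving i and j on one route instead of two:
# s(i, j) = d(depot, i) + d(depot, j) - d(i, j)
savings = sorted(
    ((dist(depot, suppliers[i]) + dist(depot, suppliers[j]) - dist(suppliers[i], suppliers[j]), i, j)
     for i, j in itertools.combinations(suppliers, 2)),
    reverse=True)
print(savings)  # merge the supplier pairs with the largest savings first

# Nearest-neighbour visiting sequence within a route group
route, current = [], depot
remaining = dict(suppliers)
while remaining:
    name = min(remaining, key=lambda n: dist(current, remaining[n]))
    route.append(name)
    current = remaining.pop(name)
print(route)
```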

Keywords: efficiency, logistic cost, milk run, saving method, simulation

Procedia PDF Downloads 423
3782 Evaluation of Low-Reducible Sinter in Blast Furnace Technology by Mathematical Model Developed at Centre ENET, VSB: Technical University of Ostrava

Authors: S. Jursová, P. Pustějovská, S. Brožová, J. Bilík

Abstract:

The paper deals with possibilities for interpreting iron ore reducibility tests. It presents a mathematical model developed at Centre ENET, VŠB–Technical University of Ostrava, Czech Republic for evaluating metallurgical material of blast furnace feedstock such as iron ore, sinter or pellets. From the test data, the model predicts the material's usage in blast furnace technology and its effects on the production parameters of the shaft aggregate. The paper first sums up the general concepts and experience in mathematical modelling of iron ore reduction, presenting the basic equations for the calculation and the main parts of the developed model. The experimental part gives an example of the use of the mathematical model and describes the data used for a predictive calculation: the material and the method of the iron ore reducibility test that was carried out. The effects of the material used on carbon consumption, the rate of direct reduction, and the reduction process as a whole are then interpreted graphically.
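
The abstract does not reproduce the model's equations, but the basic data reduction of a reducibility test can be sketched generically: the reduction degree follows from the oxygen mass lost, and a reducibility index is commonly taken as the slope dR/dt at a fixed degree (e.g., 40% in ISO 4695-type tests). The readings below are invented, and this is a generic illustration, not the ENET model itself.

```python
import numpy as np

# Hypothetical mass-loss readings from a reducibility test (invented numbers).
t_min   = np.array([0, 30, 60, 90, 120, 150])            # time, minutes
m_loss  = np.array([0.0, 8.1, 14.9, 20.2, 24.1, 27.0])   # grams of oxygen removed
m_O_red = 45.0  # grams of removable (reducible) oxygen in the sample

R = 100.0 * m_loss / m_O_red    # reduction degree, %
dRdt = np.gradient(R, t_min)    # reduction rate, %/min, at each reading

# Reducibility index: interpolate dR/dt at R = 40 %
print(np.interp(40.0, R, dRdt))
```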

Keywords: blast furnace technology, iron ore reduction, mathematical model, prediction of iron ore reduction

Procedia PDF Downloads 655
3781 Device Control Using Brain Computer Interface

Authors: P. Neeraj, Anurag Sharma, Harsukhpreet Singh

Abstract:

In recent years, brain-computer interface (BCI) schemes based on the steady-state visual evoked potential (SSVEP) have received much attention. This study develops an SSVEP-based BCI scheme that can switch a device mock-up between two states, ON and OFF. Two distinct flicker frequencies in the low-frequency band were used to evoke the SSVEPs and were shown on a liquid crystal display (LCD) screen using LabVIEW. Two stimulus colors, yellow and blue, were used to train the system. The electroencephalogram (EEG) signals were recorded from the occipital region, and features were extracted using the discrete wavelet transform. A multilayer neural network algorithm (NNA) was used to classify the SSVEP signals. Regression plots from training the network with different algorithms showed that the Levenberg-Marquardt training algorithm reached an accuracy of 93.9%, superior to the other training algorithms tested.
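
A hedged Python sketch of the same pipeline, with synthetic EEG standing in for recordings: discrete wavelet decomposition for features, then a multilayer network classifier. Note that scikit-learn does not offer Levenberg-Marquardt training (the paper likely used a toolbox that does), so 'adam' is a stand-in here, and all data and parameters are assumptions.

```python
# Feature extraction + classification sketch (synthetic EEG).
# Requires: numpy, pywt (PyWavelets), scikit-learn.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs = 256  # Hz, assumed sampling rate

def make_trial(f_stim):
    """Synthetic occipital EEG: an SSVEP tone at f_stim plus noise."""
    t = np.arange(fs * 2) / fs
    return np.sin(2 * np.pi * f_stim * t) + 0.5 * rng.standard_normal(t.size)

def dwt_features(x):
    # Discrete wavelet decomposition; sub-band energies as features.
    coeffs = pywt.wavedec(x, "db4", level=5)
    return np.array([np.sum(c ** 2) for c in coeffs])

X = np.array([dwt_features(make_trial(f)) for f in [7.0] * 40 + [13.0] * 40])
y = np.array([0] * 40 + [1] * 40)  # 0 = OFF stimulus, 1 = ON stimulus

clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000).fit(X, y)
print(clf.score(X, y))  # training accuracy of the sketch
```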

Keywords: brain computer interface, electroencephalography, steady-state visual evoked potential, wavelet transform, neural network

Procedia PDF Downloads 317
3780 Vector Quantization Based on Vector Difference Scheme for Image Enhancement

Authors: Biji Jacob

Abstract:

The vector quantization algorithm uses a minimum-distance calculation for codebook generation, a time-consuming computation performed on each pixel value that leads to high computational complexity; the codebook is updated by computing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified, based on a vector difference algorithm, for image enhancement. In the proposed scheme, the differences between vectors are taken as the new-generation vectors, i.e., the new codebook vectors. The codebook is updated by comparing each new-generation vector against a threshold on its error with respect to the parent vector; this minimum error decides the fitness of each newly generated vector. The codebook is thus generated adaptively, and the fitness value is used to suppress the degraded portion of the image, leading to image enhancement through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme effectively modifies the vector quantization algorithm for image enhancement, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
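
A minimal sketch of the codebook-update idea as the abstract describes it, assuming 4-pixel blocks and an invented threshold; this follows one plausible reading of the description, not a published reference implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
blocks = rng.integers(0, 256, size=(500, 4)).astype(float)  # 4-pixel image vectors

codebook = blocks[rng.choice(len(blocks), 8, replace=False)]  # initial codebook
threshold = 40.0  # max allowed error to the parent vector (assumed value)

# New-generation vectors are differences between existing codebook vectors;
# a candidate survives if its error to its parent is below the threshold.
new_vectors = []
for i in range(len(codebook)):
    for j in range(len(codebook)):
        if i == j:
            continue
        cand = codebook[i] - codebook[j]          # vector difference
        err = np.linalg.norm(cand - codebook[i])  # fitness vs. the parent vector
        if err < threshold:
            new_vectors.append(cand)

if new_vectors:
    codebook = np.vstack([codebook, new_vectors])
print(codebook.shape)  # the adaptively grown codebook
```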

Keywords: codebook, image enhancement, vector difference, vector quantization

Procedia PDF Downloads 247
3779 Screen Method of Distributed Cooperative Navigation Factors for Unmanned Aerial Vehicle Swarm

Authors: Can Zhang, Qun Li, Yonglin Lei, Zhi Zhu, Dong Guo

Abstract:

Aiming at the problem of factor screening in the distributed collaborative navigation of a dense UAV swarm, an efficient distributed collaborative navigation factor screening method is proposed. The method considers the balance between computing load and positioning accuracy. The proposed algorithm uses a factor graph model to implement a distributed collaborative navigation algorithm, with each UAV's own GNSS information and the inter-UAV ranging information serving as the positioning factors. In this distributed scheme, a local factor graph is established for each UAV, and the positioning factors of nodes with good geometric position distribution and small variance are selected to participate in the navigation calculation. To demonstrate and verify the proposed method, simulations and experiments in different scenarios were performed. Simulation results show that the proposed scheme achieves a good balance between computing load and positioning accuracy in the distributed cooperative navigation calculation of a UAV swarm. The proposed algorithm has important theoretical and practical value for both industry and academia.
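
The screening rule sketched below keeps only ranging factors whose variance is small and whose neighbours are well spread in bearing, which is one plausible reading of "good geometric position distribution and small variance"; the thresholds and data are invented, and this is not the authors' algorithm.

```python
import numpy as np

# Hypothetical neighbours of one UAV: (x, y) relative position in metres and
# the variance of the corresponding ranging factor.
neighbors = [
    {"id": 2, "pos": np.array([120.0,   10.0]), "var": 0.8},
    {"id": 3, "pos": np.array([-90.0,  100.0]), "var": 0.5},
    {"id": 4, "pos": np.array([  5.0, -150.0]), "var": 3.2},
    {"id": 5, "pos": np.array([110.0,   15.0]), "var": 0.6},
]

VAR_MAX = 1.0          # discard noisy ranging factors (assumed threshold)
MIN_SEPARATION = 0.5   # radians; avoid near-collinear neighbours (assumed)

kept = [n for n in neighbors if n["var"] < VAR_MAX]
kept.sort(key=lambda n: n["var"])  # most precise factors considered first

selected = []
for n in kept:
    bearing = np.arctan2(n["pos"][1], n["pos"][0])
    # keep only neighbours whose bearing is well separated from those chosen
    if all(abs(np.angle(np.exp(1j * (bearing - np.arctan2(s["pos"][1], s["pos"][0])))))
           > MIN_SEPARATION for s in selected):
        selected.append(n)
print([n["id"] for n in selected])  # factors admitted to the local factor graph
```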

Keywords: screen method, cooperative positioning system, UAV swarm, factor graph, cooperative navigation

Procedia PDF Downloads 54
3778 Stress Analysis of the Ceramics Heads with Different Sizes under the Destruction Tests

Authors: V. Fuis, P. Janicek, T. Navrat

Abstract:

The global problem solved here is the calculation of the parameters of a ceramic material from a set of destruction tests of ceramic heads of total hip joint endoprostheses. The standard way of calculating the material parameters consists in carrying out a set of three- or four-point bending tests on specimens cut out from parts of the ceramic material to be analysed. In the case of ceramic heads, it is not possible to cut out specimens of the required dimensions because the heads are too small (if the cut-out specimens were smaller than the normalized ones, the material parameters derived from them would exhibit higher strength values than the given ceramic material really has). A special destruction device for head destruction was designed, and the local problem solved here is the modification of this device based on an analysis of the tensile stress in the head for two different values of the depth of the conical hole in the head. The goal of the modification is to shift the location of the extreme value of the maximum principal stress σ1 from the region of the hole bottom to its opening. This modification will increase the credibility of the material properties obtained for the bioceramics, which will be determined from a set of head destructions using the Weibull weakest-link theory.
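
The Weibull weakest-link evaluation mentioned at the end can be sketched as a two-parameter fit to the fracture loads. The data below are invented, and scipy's generic fitter stands in for whatever maximum-likelihood procedure the authors use.

```python
import numpy as np
from scipy import stats

# Hypothetical fracture loads (kN) from a set of head destruction tests.
loads = np.array([38.1, 41.5, 44.0, 45.2, 47.3, 48.9, 50.1, 52.6, 54.0, 57.8])

# Two-parameter Weibull: fix the location at 0 and fit the shape
# (Weibull modulus m) and scale (characteristic strength).
m, loc, scale = stats.weibull_min.fit(loads, floc=0)
print(f"Weibull modulus m = {m:.1f}, characteristic load = {scale:.1f} kN")

# Probability of failure at a given load under the weakest-link model
print(stats.weibull_min.cdf(45.0, m, loc=0, scale=scale))
```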

Keywords: ceramic heads, depth of the conical hole, destruction test, material parameters, principal stress, total hip joint endoprosthesis

Procedia PDF Downloads 396
3777 Thermodynamic Approach of Lanthanide-Iron Double Oxides Formation

Authors: Vera Varazashvili, Murman Tsarakhov, Tamar Mirianashvili, Teimuraz Pavlenishvili, Tengiz Machaladze, Mzia Khundadze

Abstract:

The standard Gibbs energy of formation ΔGfor(298.15) of lanthanide-iron double oxides of garnet-type crystal structure, R3Fe5O12 (RIG, where R is a rare earth ion), from the initial oxides is evaluated. The calculation is based on the standard entropies S298.15 and standard enthalpies of formation ΔH298.15 of the compounds involved in the garnet synthesis. The Gibbs energy of formation is presented as a function of temperature, ΔGfor(T), for the range 300-1600 K. The necessary starting thermodynamic data were obtained from a calorimetric study of the heat capacity-temperature functions and by using a semi-empirical method for the calculation of ΔH298.15 of formation. The thermodynamic functions at standard temperature - enthalpy, entropy and Gibbs energy - are recommended as reference data for technological evaluations. Across the isostructural series of rare earth-iron garnets, the correlation between the thermodynamic properties and the characteristics of the lanthanide ions is elucidated.
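
In its simplest constant-ΔH, constant-ΔS form, the temperature function described above reduces to ΔG(T) = ΔH298 - T·ΔS298. The sketch below evaluates it over the stated range with invented reaction values, since the abstract does not tabulate the calorimetric data.

```python
import numpy as np

# Hypothetical values for the garnet-forming reaction
# 3 R2O3 + 5 Fe2O3 -> 2 R3Fe5O12; the actual calorimetric values are
# reported in the paper, not here.
dH_298 = -65.0e3   # J/mol, enthalpy of formation from the oxides (assumed)
dS_298 = -12.0     # J/(mol*K), entropy change (assumed)

T = np.linspace(300.0, 1600.0, 14)
dG = dH_298 - T * dS_298   # Gibbs energy of formation from the oxides
for Ti, Gi in zip(T, dG):
    print(f"T = {Ti:6.0f} K   dG = {Gi/1000:8.2f} kJ/mol")
```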

Keywords: calorimetry, entropy, enthalpy, heat capacity, Gibbs energy of formation, rare earth iron garnets

Procedia PDF Downloads 363
3776 Standard Gibbs Energy of Formation and Entropy of Lanthanide-Iron Oxides of Garnet Crystal Structure

Authors: Vera Varazashvili, Murman Tsarakhov, Tamar Mirianashvili, Teimuraz Pavlenishvili, Tengiz Machaladze, Mzia Khundadze

Abstract:

The standard Gibbs energy of formation ΔGfor(298.15) of lanthanide-iron double oxides of garnet-type crystal structure, R3Fe5O12 (RIG, where R is a rare earth ion), from the initial oxides is evaluated. The calculation is based on the standard entropies S298.15 and standard enthalpies of formation ΔH298.15 of the compounds involved in the garnet synthesis. The Gibbs energy of formation is presented as a function of temperature, ΔGfor(T), for the range 300-1600 K. The necessary starting thermodynamic data were obtained from a calorimetric study of the heat capacity and by using a semi-empirical method for the calculation of ΔH298.15 of formation. The thermodynamic functions at standard temperature - enthalpy, entropy and Gibbs energy - are recommended as reference data for technological evaluations. Across the isostructural series of rare earth-iron garnets, the correlation between the thermodynamic properties and the characteristics of the lanthanide ions is elucidated.

Keywords: calorimetry, entropy, heat capacity, Gibbs energy of formation, rare earth iron garnets

Procedia PDF Downloads 338
3775 Determination of Non-CO2 Greenhouse Gas Emission in Electronics Industry

Authors: Bong Jae Lee, Jeong Il Lee, Hyo Su Kim

Abstract:

Both developed and developing countries adopted the decision to join the Paris Agreement to reduce greenhouse gas (GHG) emissions at the Conference of the Parties (COP) 21 meeting in Paris. As a result, developed and developing countries have to submit their Intended Nationally Determined Contributions (INDC) by 2020 and thereafter, every five years, propose a reduction target higher than the previous one. An accurate method for calculating greenhouse gas emissions is therefore essential as a rationale for implementing GHG reduction measures based on the reduction targets. Non-CO2 GHGs (CF4, NF3, N2O, SF6 and so on) are widely used in the fabrication processes of semiconductor manufacturing and the etching/deposition processes of display manufacturing. The global warming potential (GWP) of these non-CO2 gases is much higher than that of CO2, which means they have a greater effect on global warming. GHG calculation methods for the electronics industry are provided by the Intergovernmental Panel on Climate Change (IPCC) and the U.S. Environmental Protection Agency (EPA), and they are being discussed at the ISO/TC 146 meetings. As discussed above, precision and accuracy in calculating non-CO2 GHG are becoming more important, so this study discusses the implications of the calculation methods by comparing those of the IPCC and the EPA. In conclusion, the EPA method is more detailed, and it also provides a calculation for N2O. For the default emission factors, the IPCC provides more conservative results than the EPA: the IPCC factors were developed for calculating national GHG emissions, while the EPA factors were developed specifically for the U.S. and thus to address the environmental issues of the U.S. Semiconductor factory 'A' measured F-gas according to the EPA Destruction and Removal Efficiency (DRE) protocol and estimated its own DRE, and its emission factor shows a higher DRE than the default factors of the IPCC and the EPA. Each country can therefore improve its GHG emission calculation by developing its own emission factors (where possible) when reporting its Nationally Determined Contributions (NDC). Acknowledgements: This work was supported by the Korea Evaluation Institute of Industrial Technology (No. 10053589).
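
A simplified version of the accounting both methods perform: gas consumption is reduced by in-process utilization and abatement (DRE), then converted with GWP. The sketch below omits by-product formation, uses approximate AR5-style GWP100 figures, and invents the activity data; it is not the factory 'A' measurement or the full IPCC/EPA tier equations.

```python
# Simplified F-gas accounting sketch (one process, no by-product formation).
# GWP100 values approximate IPCC AR5 figures; activity data are invented.
GWP = {"CF4": 6630, "NF3": 16100, "SF6": 23500, "N2O": 265}

usage_kg    = {"CF4": 120.0, "NF3": 300.0, "SF6": 40.0, "N2O": 500.0}  # gas fed
utilization = {"CF4": 0.2,  "NF3": 0.8,  "SF6": 0.3,  "N2O": 0.4}     # consumed in process
dre         = {"CF4": 0.9,  "NF3": 0.95, "SF6": 0.9,  "N2O": 0.5}     # abatement efficiency

total_co2e = 0.0
for gas, kg in usage_kg.items():
    emitted_kg = kg * (1 - utilization[gas]) * (1 - dre[gas])
    total_co2e += emitted_kg * GWP[gas]
print(f"{total_co2e / 1000:.1f} t CO2e")  # tonnes of CO2-equivalent
```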

Keywords: non-CO2 GHG, GHG emission, electronics industry, measuring method

Procedia PDF Downloads 268
3774 An Atomic Finite Element Model for Mechanical Properties of Graphene Sheets

Authors: Win-Jin Chang, Haw-Long Lee, Yu-Ching Yang

Abstract:

In this study, we use the atomic-scale finite element method to investigate the mechanical behavior of armchair- and zigzag-structured nanoporous graphene sheets with the clamped-free-free-free boundary condition under tension and shear loadings. The effect of porosity on the Young's modulus and shear modulus of nanoporous graphene sheets is pronounced: for both the armchair- and zigzag-structured sheets, Young's modulus and shear modulus decrease with increasing porosity. The Young's modulus and shear modulus of zigzag graphene are larger than those of the armchair sheet at the same porosity. The results are useful for the design of nanoporous graphene sheets.

Keywords: graphene, nanoporous, Young's modulus, shear modulus

Procedia PDF Downloads 374
3773 A New Social Vulnerability Index for Evaluating Social Vulnerability to Climate Change at the Local Scale

Authors: Cuong V Nguyen, Ralph Horne, John Fien, France Cheong

Abstract:

Social vulnerability to climate change is increasingly being acknowledged, and proposals to measure and manage it are emerging. Building upon this work, this paper proposes an approach to social vulnerability assessment using a new mechanism to aggregate and account for causal relationships among components of a Social Vulnerability Index (SVI). To operationalize this index, the authors propose a means to develop an appropriate primary dataset through the application of a specifically designed household survey questionnaire. The data collection and analysis, including the calibration and calculation of the SVI, are demonstrated through application in a case study city in central coastal Vietnam. Calculating the SVI at the fine-grained local neighbourhood scale provides high resolution in the vulnerability assessment and also obviates the need for secondary data, which may be unavailable or problematic, particularly at the local scale in developing countries. The SVI household survey is underpinned by the results of a Delphi survey, in-depth interviews and focus group discussions with local environmental professionals and community members. The research reveals inherent limitations of existing SVIs but also indicates their potential for assessing social vulnerability and for making decisions associated with responding to climate change at the local scale.

Keywords: climate change, local scale, social vulnerability, social vulnerability index

Procedia PDF Downloads 411
3772 The Use of Ontology Framework for Automation Digital Forensics Investigation

Authors: Ahmad Luthfi

Abstract:

One of the main goals of a computer forensic analyst is to determine the cause and effect of the acquisition of digital evidence, in order to obtain information relevant to the case being handled. To get fast and accurate results, this paper discusses the approach known as the ontology framework. This model uses a structured hierarchy of layers that connects the variants and search activities of an investigation so that computer forensic analysis activities can be carried out automatically. Two main layers are used, namely analysis tools and the operating system. Using the concept of ontology, the second layer is automatically designed to help the investigator perform the acquisition of digital evidence. The automation methodology of this research utilizes forward chaining, in which the system searches through investigative steps automatically structured in accordance with the rules of the ontology.
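
A toy version of the forward-chaining loop the abstract describes, with a hypothetical two-layer rule base; the rule names and facts are invented for illustration and are not part of the authors' ontology.

```python
# Toy forward-chaining engine over hypothetical investigation rules.
rules = [
    ({"disk_image_acquired", "os_is_windows"}, "parse_ntfs_artifacts"),
    ({"parse_ntfs_artifacts"}, "extract_browser_history"),
    ({"extract_browser_history", "keyword_hit"}, "flag_evidence_for_review"),
]

facts = {"disk_image_acquired", "os_is_windows", "keyword_hit"}

changed = True
while changed:                      # iterate until no rule adds a new fact
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # fire the rule, derive the next step
            changed = True
print(sorted(facts))
```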

Keywords: ontology, framework, automation, forensics

Procedia PDF Downloads 319
3771 Collaborative Energy Optimization for Multi-Microgrid Distribution System Based on Two-Stage Game Approach

Authors: Hanmei Peng, Yiqun Wang, Mao Tan, Zhuocen Dai, Yongxin Su

Abstract:

Efficient energy management in multi-microgrid distribution systems holds significant importance for enhancing the economic benefits of regional power grids. To better balance the conflicts among the various stakeholders, a two-stage game-based collaborative optimization approach is proposed in this paper, effectively addressing the realistic scenario involving both competition and collaboration among stakeholders. The first stage, aimed at maximizing individual benefits, involves constructing a non-cooperative tariff game model for the distribution network and the surplus microgrid. In the second stage, considering power flow and physical line capacity constraints, we establish a cooperative P2P game model for the multi-microgrid distribution system, and the optimization employs the method of Lagrange multipliers to handle the complex constraints. Simulation results demonstrate that the proposed approach can effectively improve the system economics while harmonizing individual and collective rationality.

Keywords: cooperative game, collaborative optimization, multi-microgrid distribution system, non-cooperative game

Procedia PDF Downloads 43
3770 Effects of Local Ground Conditions on Site Response Analysis Results in Hungary

Authors: Orsolya Kegyes-Brassai, Zsolt Szilvágyi, Ákos Wolf, Richard P. Ray

Abstract:

Local ground conditions have a substantial influence on the seismic response of structures. Their inclusion in seismic hazard assessment and structural design can be realized at different levels of sophistication. However, response results based on more advanced calculation methods e.g. nonlinear or equivalent linear site analysis tend to show significant discrepancies when compared to simpler approaches. This project's main objective was to compare results from several 1-D response programs to Eurocode 8 design spectra. Data from in-situ site investigations were used for assessing local ground conditions at several locations in Hungary. After discussion of the in-situ measurements and calculation methods used, a comprehensive evaluation of all major contributing factors for site response is given. While the Eurocode spectra should account for local ground conditions based on soil classification, there is a wide variation in peak ground acceleration determined from 1-D analyses versus Eurocode. Results show that current Eurocode 8 design spectra may not be conservative enough to account for local ground conditions typical for Hungary.

Keywords: 1-D site response analysis, multichannel analysis of surface waves (MASW), seismic CPT, seismic hazard assessment

Procedia PDF Downloads 233
3769 Multi Tier Data Collection and Estimation, Utilizing Queue Model in Wireless Sensor Networks

Authors: Amirhossein Mohajerzadeh, Abolghasem Mohajerzadeh

Abstract:

In this paper, a target parameter is estimated with desirable precision in a hierarchical wireless sensor network (WSN), while the proposed algorithm also tries to prolong the network lifetime as much as possible, using an efficient data collection algorithm. The distribution function of the target parameter is considered unknown. Sensor nodes sense the environment and send the data to the base station, called the fusion center (FC), using a hierarchical data collection algorithm, and the FC reconstructs the underlying phenomenon from the collected data. Considering the aggregation level x, the goal is to provide the essential infrastructure to find the best value of the aggregation level in order to prolong the network lifetime as much as possible while the desired accuracy is guaranteed (the required sample size depends entirely on the desired precision). First, the sample-size calculation algorithm is discussed; second, the average queue length based on the M/M[x]/1/K queue model is determined and used for the energy consumption calculation. Nodes can decrease the transmission cost by aggregating incoming data. Finally, the performance of the new algorithm is evaluated in terms of lifetime and estimation accuracy.
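
As a simplified stand-in for the batch-service M/M[x]/1/K analysis (the batch model has no equally compact closed form), the sketch below evaluates the classical M/M/1/K mean number in system, which an energy calculation could consume in the same way; the arrival, service and energy figures are assumptions.

```python
def mm1k_mean_in_system(lam: float, mu: float, K: int) -> float:
    """Mean number of packets in an M/M/1/K queue (valid for rho != 1)."""
    rho = lam / mu
    num = rho * (1 - (K + 1) * rho**K + K * rho**(K + 1))
    den = (1 - rho) * (1 - rho**(K + 1))
    return num / den

# Hypothetical node: 8 packets/s arriving, 10 packets/s service, buffer of 20.
L = mm1k_mean_in_system(lam=8.0, mu=10.0, K=20)
print(f"average queue length ~ {L:.2f} packets")

# Energy bookkeeping sketch: transmission cost grows with queue occupancy.
E_TX_PER_PACKET = 50e-6   # J, assumed radio cost per packet
print(f"mean TX energy per slot ~ {L * E_TX_PER_PACKET * 1e6:.1f} uJ")
```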

Keywords: aggregation, estimation, queuing, wireless sensor network

Procedia PDF Downloads 166
3768 Hohmann Transfer and Bi-Elliptic Hohmann Transfer in TRAPPIST-1 System

Authors: Jorge L. Nisperuza, Wilson Sandoval, Edward A. Gil, Johan A. Jimenez

Abstract:

In orbital mechanics, an active research topic is the calculation of interplanetary trajectories that are efficient in terms of energy and time. In this vein, this work concerns the calculation of the orbital elements for sending interplanetary probes within the extrasolar system TRAPPIST-1. Specifically, using the mathematical expressions for the parameters of circular and elliptical trajectories, and expressions for the flight time and the velocity increments of orbital transfers, the orbital parameters and trajectory plots of the Hohmann and bi-elliptic Hohmann transfers for sending a probe from the innermost planet to each of the other planets of the system are obtained. The ratios between the velocity increments and between the flight times of the two transfer types are found. The results show that, for all cases under consideration, the Hohmann transfer has the lowest energy and time cost, in agreement with the theory of Hohmann and bi-elliptic Hohmann transfers. A saving in the velocity increment of up to 87% was found, occurring for the transfer between the two innermost planets, whereas the flight time increases by a factor of up to 6.6 when the bi-elliptic transfer is used, for the case of sending a probe from the innermost planet to the outermost.
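
The Hohmann Δv budget used in such comparisons follows directly from the vis-viva equation. The sketch below evaluates it for roughly the TRAPPIST-1 b to c transfer; the stellar mass and semi-major axes are approximate literature values and should be treated as assumptions, not the paper's inputs.

```python
import math

G     = 6.674e-11          # m^3 kg^-1 s^-2
M_sun = 1.989e30           # kg
mu    = G * 0.089 * M_sun  # TRAPPIST-1 mass ~ 0.089 M_sun (approximate)

AU = 1.496e11
r1 = 0.0115 * AU           # ~ TRAPPIST-1 b semi-major axis (approximate)
r2 = 0.0158 * AU           # ~ TRAPPIST-1 c semi-major axis (approximate)

def hohmann_dv(mu, r1, r2):
    """Total delta-v for a Hohmann transfer between circular orbits r1 -> r2."""
    dv1 = math.sqrt(mu / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # injection burn
    dv2 = math.sqrt(mu / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularization burn
    return dv1 + dv2

def hohmann_time(mu, r1, r2):
    """Transfer time = half the period of the transfer ellipse."""
    a = (r1 + r2) / 2
    return math.pi * math.sqrt(a**3 / mu)

print(f"delta-v ~ {hohmann_dv(mu, r1, r2) / 1000:.2f} km/s")
print(f"flight time ~ {hohmann_time(mu, r1, r2) / 86400:.2f} days")
```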

Keywords: bi-elliptic Hohmann transfer, exoplanet, extrasolar system, Hohmann transfer, TRAPPIST-1

Procedia PDF Downloads 169
3767 Statistical Characteristics of Code Formula for Design of Concrete Structures

Authors: Inyeol Paik, Ah-Ryang Kim

Abstract:

In this research, a statistical analysis is carried out to examine the statistical properties of the formulas given in design codes for concrete structures. The design formulas of the Korea Highway Bridge Design Code - the limit state design method (KHBDC), the current national bridge design code - and the design code for concrete structures of the Korea Concrete Institute (KCI) are analysed. The safety levels provided by the strength formulas of the design codes are defined based on probabilistic and statistical theory. KHBDC is a reliability-based design code whose load and resistance factors were calibrated to attain a target reliability index, and defining the statistical properties of the design formulas is essential in this calibration process. In general, the statistical characteristics of a member strength are due to three factors: first, the difference between the material strength of the actual construction and that used in the design calculation; second, the difference between the actual dimensions of the constructed sections and those used in the design calculation; and third, the difference between the strength of the actual member and the formula simplified for the design calculation. This paper focuses on the third difference. The formulas for calculating the shear strength of concrete members are presented in different ways in KHBDC and KCI. In this study, the statistical properties of the design formulas were obtained through comparison with a database comprising experimental results from the reference publications; the test specimens were either reinforced with shear stirrups or not. For the applied database, the bias factor was about 1.12 and the coefficient of variation about 0.18. Applying these statistical properties in a reliability analysis shows that the resistance factors of the current design codes satisfy the target reliability indexes of both codes. The minimum resistance factors of KHBDC, which is written in the material resistance factor format, and KCI, which is in the member resistance factor format, are also obtained and presented. Further research is underway to calibrate the resistance factors of the high-strength and high-performance concrete design guide.
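
The third-factor statistics described above come down to the ratio of tested to predicted strength. A minimal sketch with invented ratios (the paper's database gave a bias of about 1.12 and a coefficient of variation of about 0.18):

```python
import numpy as np

# Hypothetical test-to-prediction strength ratios V_test / V_code for shear
# specimens; the paper's database is not reproduced in the abstract.
ratios = np.array([1.31, 0.94, 1.22, 1.05, 1.40, 0.89, 1.18, 1.02, 1.26, 0.97])

bias = ratios.mean()             # bias factor of the design formula
cov = ratios.std(ddof=1) / bias  # coefficient of variation
print(f"bias = {bias:.2f}, CoV = {cov:.2f}")
```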

Keywords: concrete design code, reliability analysis, resistance factor, shear strength, statistical property

Procedia PDF Downloads 298
3766 Bayesian Flexibility Modelling of the Conditional Autoregressive Prior in a Disease Mapping Model

Authors: Davies Obaromi, Qin Yongsong, James Ndege, Azeez Adeboye, Akinwumi Odeyemi

Abstract:

The basic model usually used in disease mapping is the Besag, York and Mollie (BYM) model, which combines spatially structured and spatially unstructured priors as random effects. The Bayesian conditional autoregressive (CAR) model is commonly used in disease mapping for smoothing the relative risk of a disease, as in the BYM model. The CAR model, usually assigned as a prior to one of the spatial random effects in the BYM model, successfully uses information from adjacent sites to improve the estimates for individual sites. To our knowledge, the posterior covariance matrix implied by the CAR prior for the spatial random effects has some unrealistic or counter-intuitive consequences. Moreover, in the conventional BYM model the spatially structured and unstructured random components cannot be identified independently, which complicates the prior definitions for the hyperparameters of the two random effects. The main objective of this study is therefore to construct and utilize an extended Bayesian spatial CAR model for studying tuberculosis patterns in the Eastern Cape Province of South Africa, and to compare its flexibility with some existing CAR models. The results revealed the flexibility and robustness of this alternative extended CAR model relative to the commonly used CAR models, compared using the deviance information criterion. The extended Bayesian spatial CAR model proved to be a useful and robust tool for disease modelling and as a prior for the structured spatial random effects, owing to the inclusion of an extra hyperparameter.
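
The proper-CAR structure under discussion can be written down in a few lines: with adjacency matrix W, diagonal neighbour counts D, spatial dependence α and precision τ, the joint precision is Q = τ(D - αW). A numpy sketch on a toy four-region map (invented adjacency; this illustrates the standard proper CAR, not the authors' extension):

```python
import numpy as np

# Toy map: 4 regions on a line; neighbours share an edge.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))  # neighbour counts
alpha, tau = 0.9, 2.0       # assumed spatial dependence and precision

Q = tau * (D - alpha * W)   # proper-CAR joint precision matrix
Sigma = np.linalg.inv(Q)    # implied prior covariance of the spatial effects
print(Sigma.round(3))

# Full conditional of phi_i given its neighbours: mean is alpha times the
# neighbour average, variance 1/(tau * d_i) -- the "borrowing of strength".
phi = np.array([0.3, -0.1, 0.4, 0.0])
i = 1
mean_i = alpha * W[i] @ phi / D[i, i]
var_i = 1.0 / (tau * D[i, i])
print(mean_i, var_i)
```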

Keywords: Besag2, CAR models, disease mapping, INLA, spatial models

Procedia PDF Downloads 251
3765 Decision Support for Modularisation: Engineering Construction Case Studies

Authors: Rolla Monib, Chris Ian Goodier, Alistair Gibb

Abstract:

This paper investigates decision support strategies in the engineering construction (EC) sector for determining the most appropriate degree of modularization. This is achieved through three oil and gas (O&G) and two power plant case studies via semi-structured interviews (n=59 and n=27, respectively), analysis of project documents, and case-study-specific semi-structured validation interviews (n=12 and n=8). New terminology to distinguish degrees of modularization is proposed, along with a decision-making support checklist and a diagrammatic decision-making support figure. Results indicate that the EC sub-sectors were substantially more satisfied with the application of component, structural, or traditional modularization than with system modularization for some types of modules. The key drivers for decisions on the degree of modularization vary across module types. This paper can help the EC sector determine the most suitable degree of modularization via a decision-making support strategy.

Keywords: modularization, engineering construction, case study, decision support

Procedia PDF Downloads 68
3764 Vibration Frequencies Analysis of Nanoporous Graphene Membrane

Authors: Haw-Long Lee, Win-Jin Chang, Yu-Ching Yang

Abstract:

In this study, we use the atomic-scale finite element method to investigate the vibrational behavior of armchair- and zigzag-structured nanoporous graphene layers of different sizes under the SFSF and CFFF boundary conditions. The fundamental frequencies computed for graphene layers without a pore are compared with the results of previous studies, and we observe very good correspondence between our results and those of the other studies in all the considered cases. For the armchair- and zigzag-structured nanoporous graphene layers under the SFSF and CFFF boundary conditions, the frequencies decrease as the size of the nanopore increases. When the positions of the pore are symmetric with respect to the center of the graphene, the frequency of the zigzag-pore graphene is higher than that of the armchair one.

Keywords: atomic-scale finite element method, graphene, nanoporous, natural frequency

Procedia PDF Downloads 340
3763 Self-Attention Mechanism for Target Hiding Based on Satellite Images

Authors: Hao Yuan, Yongjian Shen, Xiangjun He, Yuheng Li, Zhouzhou Zhang, Pengyu Zhang, Minkang Cai

Abstract:

Remote sensing data can support decision-making in disaster assessment and disaster relief. Traditional methods for processing sensitive targets in remote sensing mapping are based mainly on manual retrieval and image editing tools, which are inefficient. Deep learning methods for sensitive-target hiding are faster and more flexible, but they have disadvantages in training time and computational cost. This paper proposes a target-hiding model, Self-Attention (SA) Deepfill, which uses self-attention modules to replace some of the gated convolution layers in image inpainting. This reduces the model's computational load and improves its performance. Free-form masks are also added to the model's training to enhance its generality. An experiment on an open remote sensing dataset demonstrates the efficiency of our method; moreover, experimental comparison shows that the proposed method can be trained for longer without over-fitting. Finally, compared with existing methods, the proposed model has a lower computational weight and better performance.
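
A SAGAN-style self-attention block of the kind that could replace a gated convolution layer is sketched below in PyTorch. The layer sizes are illustrative, and this is a generic module, not the authors' SA-Deepfill implementation.

```python
# Generic self-attention block for feature maps (SAGAN-style); illustrative
# of the kind of module that can stand in for a gated convolution inside an
# inpainting network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.k = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.v = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.q(x).reshape(b, -1, n).permute(0, 2, 1)   # (b, n, c/8)
        k = self.k(x).reshape(b, -1, n)                    # (b, c/8, n)
        attn = F.softmax(torch.bmm(q, k), dim=-1)          # (b, n, n) attention map
        v = self.v(x).reshape(b, c, n)                     # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).reshape(b, c, h, w)
        return self.gamma * out + x                        # residual connection

feat = torch.randn(1, 64, 32, 32)       # a feature map inside the inpainting net
print(SelfAttention2d(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```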

Keywords: remote sensing mapping, image inpainting, self-attention mechanism, target hiding

Procedia PDF Downloads 106
3762 The Role of Midwives in Promoting Childbearing in Respect to the Law of Population Youth in Iran

Authors: Parvin Abedi, Poorandokht Afshari

Abstract:

Introduction: In 2022, the Youth Law of the Population was communicated to all organizations, including the Iranian Ministry of Health. Some articles of this law concern the role of midwives in health and treatment in promoting childbearing; Articles 45, 48, 49, and 50 relate to midwifery work and are reviewed in this article. Methods: In this review, the Law of Population Youth was examined, and the statistics of midwives working in the treatment and health sectors were collected and analyzed according to the law. Results: Nearly 47,000 midwives are working in the public and private sectors of the country. According to Article 49, which states that there should be one midwife for every two parturient women, about 12,000 midwives are needed in the treatment sector and about 8,000 in the health sector. Article 50 calls for modifying tariffs and productivity in order to increase natural childbirth, and insurance organizations should cooperate sufficiently in the payments. The tariff for midwifery services has been increased, but it is not enough for the stressful job of midwifery, and the labor incentive for deliveries performed by midwives is also low. Conclusion: Midwives are one of the fundamental pillars of the population law, and without increasing midwives' motivation it is not possible to increase the rate of natural childbirth and make childbirth pleasant.

Keywords: law of the population, midwife, motivation, childbearing

Procedia PDF Downloads 13
3761 Magnetoelectric Coupling in Hetero-Structured Nano-Composite of BST-BLFM Films

Authors: Navneet Dabra, Jasbir S. Hundal

Abstract:

A hetero-structured nano-composite thin film of Ba0.5Sr0.5TiO3/Bi0.9La0.1Fe0.9Mn0.1O3 (BST/BLFM) has been prepared by the chemical solution deposition method with various BST-to-BLFM thickness ratios. The films were deposited on a p-type Si (100) substrate. The samples exhibited low leakage current, large grain size and a uniform distribution of particles. The maximum remanent polarization (Pr) was achieved in the heterostructure with a thickness ratio of 2.65. Measurements of the dielectric tunability, electric hysteresis (P-E), ME coupling coefficient, magnetic hysteresis (M-H), ferromagnetic exchange interaction and magnetoelectric response were carried out. Field emission scanning electron microscopy was employed to investigate the surface morphology of these heterostructured nano-composite films.

Keywords: magnetoelectric, Schottky emission, interface coupling, dielectric tunability, electric hysteresis (P-E), ME coupling coefficient, magnetic hysteresis (M-H)

Procedia PDF Downloads 410
3760 A New Intelligent, Dynamic and Real Time Management System of Sewerage

Authors: R. Tlili Yaakoubi, H. Nakouri, O. Blanpain, S. Lallahem

Abstract:

Current tools for the real-time management of sewer systems rely on two pieces of software: weather forecasting software and hydraulic simulation software. The former is an important source of imprecision and uncertainty, while the latter imposes long decision time steps because of its computation time, so the results obtained generally differ from those expected. The central idea of this project is to change the basic paradigm by approaching the problem from the automatic-control side rather than the hydrological side. The objective is to make it possible to run a large number of simulations in very short times (a few seconds), allowing weather forecasts to be replaced by the direct use of real-time measured rainfall data. The aim is a system in which decisions are made from reliable data and errors are corrected continuously. A first model of control laws was built and tested with rainfalls of different return periods; the gains in volume discharged to the environment vary from 19 to 100%. A new algorithm was then developed to optimize the calculation time and thus overcome the combinatorial problem encountered in the first approach. Finally, this new algorithm was tested with a 16-year rainfall series. The gains obtained are 40% in the total volume discharged to the natural environment and 65% in the number of discharge events.

Keywords: automation, optimization, paradigm, RTC

Procedia PDF Downloads 282
3759 Assessment on Communication Students’ Internship Performances from the Employers’ Perspective

Authors: Yesuselvi Manickam, Tan Soon Chin

Abstract:

An internship is a supervised and structured learning experience related to one's field of study or career goal. Internships allow students to obtain work experience and the opportunity to apply the skills learned at university. An internship is a valuable learning experience for students; however, literature on employer assessment of Malaysian students' internship experiences is scarce. This study focuses on employers' perspectives on students' performance during their three months of internship. The results are based on a descriptive analysis of 45 sets of questionnaires gathered from the interns' on-site supervisors; the feedback of the 45 on-site supervisors was collected by postal mail. It was found that the interns had not met their on-site supervisors' expectations in many areas. The significance of this study is that employers' assessments of the internship can be used as feedback to improve the ways students are prepared for their internships and future employment.

Keywords: employers' perspective, internship, structured learning, students' performances

Procedia PDF Downloads 278
3758 The Effectiveness of Environmental Policy Instruments for Promoting Renewable Energy Consumption: Command-and-Control Policies versus Market-Based Policies

Authors: Mahmoud Hassan

Abstract:

Understanding the impact of market- and non-market-based environmental policy instruments on renewable energy consumption (REC) is crucial for the design and choice of policy packages. This study empirically investigates the effect of the environmental policy stringency index (EPS) and its components on REC in 27 OECD countries over the period 1990 to 2015, and then uses the results to identify what an appropriate environmental policy mix should look like. Relying on the two-step system GMM estimator, we provide evidence that increasing environmental policy stringency as a whole promotes renewable energy consumption in these 27 developed economies. Policymakers are thus able, through market- and non-market-based environmental policy instruments, to increase the use of renewable energy; however, not all of these instruments are effective for achieving this goal. The results indicate that R&D subsidies and trading schemes have a positive and significant impact on REC, while taxes, feed-in tariffs and emission standards do not have a significant effect. Furthermore, R&D subsidies are more effective than trading schemes for stimulating the use of clean energy. These findings proved to be robust across the three alternative panel techniques used.

Keywords: environmental policy stringency, renewable energy consumption, two-step system-GMM estimation, linear dynamic panel data model

Procedia PDF Downloads 166
3757 Comparative Study of Dose Calculation Accuracy in Bone Marrow Using Monte Carlo Method

Authors: Marzieh Jafarzadeh, Fatemeh Rezaee

Abstract:

Introduction: Ionizing radiation affects human health, notably genomic integrity and cell viability, and increases the risk of cancer and malignancy. X-ray behavior and absorbed dose calculation are therefore considered here. One applicable tool for calculating and evaluating the absorbed dose in human tissues is Monte Carlo simulation, which offers a straightforward way to simulate particle transport and integrate dose. The Monte Carlo BEAMnrc code, one of the most common diagnostic X-ray simulation codes, is used in this study. Method: In one of the hospitals under study, a set of CT scan images of previously imaged patients was extracted from the hospital database. BEAMnrc software was used for the simulation: the head of the device was simulated at an energy of 0.09 MeV with 500 million particles, and the output data from the simulation were used for phantom construction with the CT CREATE software. The percentage depth dose (PDD) was calculated using STATDOSE and then compared with international standard values. Results and Discussion: The ratio of surface dose to depth dose (D/Ds) at the measured energy was estimated to be about 4% to 8% for bone and 3% to 7% for bone marrow. Conclusion: MC simulation is an efficient and accurate method for simulating bone marrow and calculating the absorbed dose.
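
The PDD quantity compared in the results is a one-line normalization of the central-axis depth-dose curve. A hedged numpy sketch on an invented depth-dose profile (in the BEAMnrc workflow such profiles are extracted from the dose output, e.g., with STATDOSE):

```python
import numpy as np

# Invented central-axis depth-dose samples (dose per incident particle).
depth_cm = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 5.0, 7.0, 10.0])
dose     = np.array([0.62, 0.95, 1.00, 0.83, 0.66, 0.42, 0.27, 0.14])

pdd = 100.0 * dose / dose.max()   # percentage depth dose
for d, p in zip(depth_cm, pdd):
    print(f"{d:5.1f} cm  {p:6.1f} %")

# Surface-to-maximum ratio of the kind reported in the results
print(f"Ds/Dmax = {dose[0] / dose.max():.2f}")
```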

Keywords: Monte Carlo, absorption dose, BEAMnrc, bone marrow

Procedia PDF Downloads 192
3756 Physical Activity Patterns during Inpatient Rehabilitation in Patients with Recent Brain Injury

Authors: Nikita Pasricha, Karen Smith, Simone Marshall, Vincent DePaul, Jessica Trier

Abstract:

That physical activity in rehabilitation programs shapes outcomes in acquired brain injury (ABI) populations is not a new concept. However, there is a gap in understanding the physical activity patterns of inpatients in ABI rehabilitation, the trajectory of physical activity recovery, and the factors that contribute to the recovery of physical activity over the initial months post-ABI. The purpose of this study was to determine whether physical activity patterns vary among people with recent ABI in inpatient rehabilitation, and to compare their physical activity patterns with those of age-matched healthy participants. Results revealed that ABI patients spent approximately 6.7 times longer per day in sedentary postures than in active positions, whereas the control group spent only 2.8 times longer in sedentary postures. Patients with ABI also took significantly fewer steps than the age-matched healthy control participants. Within the ABI population, patients took only 0.78 times as many steps on weekends (WE) as on weekdays (WD), and participants with greater mobility limitations showed a greater WD-to-WE difference in steps taken. Potential reasons include the absence of structured weekend rehabilitation programming, lower staff availability, and varying schedules. Given that the rehabilitation program is structured on weekdays only, further research into the benefits of structured physical activities, such as group walking programs, on weekends for ABI patients in inpatient rehabilitation is warranted.

Keywords: brain, ABI, TBI, rehabilitation

Procedia PDF Downloads 34
3755 Q-Map: Clinical Concept Mining from Clinical Documents

Authors: Sheikh Shams Azam, Manoj Raju, Venkatesh Pagidimarri, Vamsi Kasivajjala

Abstract:

Over the past decade, there has been a steep rise in data-driven analysis in major areas of medicine, such as clinical decision support systems, survival analysis, patient similarity analysis, and image analytics. Much of the data in the field is well structured and available in numerical or categorical formats that can be used for experiments directly. At the opposite end of the spectrum, however, there exists a wide expanse of data that is intractable for direct analysis owing to its unstructured nature: discharge summaries, clinical notes, and procedural notes, which are written in narrative form and have neither a relational model nor a standard grammatical structure. An important step in utilizing these texts for such studies is to transform and process the data so as to retrieve structured information from the haystack of irrelevant data using information retrieval and data mining techniques. To address this problem, the authors present Q-Map, a simple yet robust system that can sift through massive datasets with unregulated formats to retrieve structured information aggressively and efficiently. It is backed by an effective mining technique based on a string-matching algorithm indexed on curated knowledge sources, which is both fast and configurable. The authors also briefly examine its performance relative to MetaMap, one of the most reputed tools for medical concept retrieval, and present the advantages the former displays over the latter.
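
The indexed string-matching idea can be illustrated with a toy lexicon. The concept IDs below are invented placeholders (UMLS content is licensed), and this greedy longest-match scan is a sketch of the general technique, not the Q-Map implementation.

```python
# Toy version of dictionary-indexed concept matching over a clinical note.
# Concept IDs are invented placeholders, not real UMLS CUIs.
lexicon = {
    "myocardial infarction": "C-0001",
    "chest pain": "C-0002",
    "aspirin": "C-0003",
}
max_len = max(len(term.split()) for term in lexicon)

note = "Patient admitted with chest pain, ruled out myocardial infarction, started on aspirin."
tokens = [t.strip(".,;:").lower() for t in note.split()]

matches = []
i = 0
while i < len(tokens):
    # Greedy longest-match scan against the indexed lexicon.
    for n in range(max_len, 0, -1):
        candidate = " ".join(tokens[i:i + n])
        if candidate in lexicon:
            matches.append((candidate, lexicon[candidate]))
            i += n
            break
    else:
        i += 1
print(matches)  # [('chest pain', 'C-0002'), ('myocardial infarction', 'C-0001'), ('aspirin', 'C-0003')]
```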

Keywords: information retrieval, unified medical language system, syntax based analysis, natural language processing, medical informatics

Procedia PDF Downloads 111