Search results for: JavaBean Components
88 Seismic Analysis of an S-Curved Viaduct using Stick and Finite Element Models
Authors: Sourabh Agrawal, Ashok K. Jain
Abstract:
Stick models are widely used in preliminary studies of the behaviour of straight as well as skew bridges and viaducts subjected to earthquakes. The application of such models to highly curved bridges continues to pose challenging problems. A viaduct proposed in the foothills of the Himalayas in Northern India is chosen for the study. It has 8 simply supported spans at 30 m c/c. It is doubly curved in the horizontal plane with a 20 m radius and is inclined in the vertical plane as well. The superstructure consists of a box section. Three models have been used: a conventional stick model, an improved stick model and a 3D finite element model. The improved stick model makes use of body constraints in order to study its capabilities. The first 8 frequencies differ by about 9.71% between the latter two models; the difference increases to 80% by the 50th mode. The viaduct was subjected to all three components of the El Centro earthquake of May 1940. The numerical integration was carried out using the Hilber-Hughes-Taylor method as implemented in SAP2000. Axial forces and moments in the bridge piers as well as lateral displacements at the bearing levels are compared for the three models. The maximum differences in axial forces, bending moments and displacements are about 25% between the improved stick model and the finite element model, whereas the maximum differences in axial forces, moments and displacements in various sections are about 35% between the improved stick model and the equivalent straight stick model. The difference in torsional moment was as high as 75%. It is concluded that the stick model with body constraints to model the bearings and expansion joints is not desirable in very sharp S-curved viaducts, even for preliminary analysis. This model can be used only to determine the first 10 frequencies and mode shapes, but not member forces. A 3D finite element analysis must be carried out for meaningful results.
Keywords: Bearing, body constraint, box girder, curved viaduct, expansion joint, finite element, link element, seismic, stick model, time history analysis.
87 Effective Planning of Public Transportation Systems: A Decision Support Application
Authors: Ferdi Sönmez, Nihal Yorulmaz
Abstract:
Decision making on the proper planning of public transportation systems to serve potential users is a must for metropolitan areas. To attract travelers to the projected modes of transport, adequately fair overall travel times should be provided. In this fashion, other benefits such as lower traffic congestion, improved road safety and lower noise and atmospheric pollution may be earned. The congestion that comes with the increasing demand for public transportation is becoming a part of our lives and making residents' lives difficult. Hence, regulations should be made to reduce this congestion. To provide a constructive and balanced regulation of public transportation systems, the right stations should be located in the right places. In this study, it is aimed to design and implement a Decision Support System (DSS) application to determine the optimal bus stop locations for public transport in Istanbul, which is one of the biggest and oldest cities in the world. Required information is gathered from IETT (Istanbul Electricity, Tram and Tunnel) Enterprises, which manages all public transportation services in the Istanbul Metropolitan Area. By using the most realistic values, cost assignments are made. The cost is calculated with the help of equations produced by a bi-level optimization model. For this study, 300 buses, 300 drivers, 10 lines and 110 stops are used. The user cost of each station and the operator cost of each line are calculated. Components such as cost, security and noise pollution are considered as significant factors affecting the solution of the set covering problem, which is used for identifying and locating the minimum number of possible bus stops. Preliminary research and model development for this study refer to a previously published article of the corresponding author. Model results are presented with the intent of providing decision support to specialists on locating stops effectively.
Keywords: User cost, bi-level optimization model, decision support, operator cost, transportation.
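The set covering formulation mentioned in the abstract can be illustrated with a toy greedy heuristic: pick the fewest candidate stops so that every demand point is covered. A minimal sketch follows; the stop names, coverage sets and the greedy rule are illustrative assumptions rather than the study's bi-level model.

```python
# Minimal sketch of a set covering step: choose the fewest candidate bus stops so that every
# demand point is within range of at least one chosen stop. Candidate stops, coverage sets and
# the greedy rule are illustrative assumptions, not the study's actual data or solver.
candidate_stops = {
    "S1": {"d1", "d2", "d3"},
    "S2": {"d3", "d4"},
    "S3": {"d4", "d5", "d6"},
    "S4": {"d1", "d6"},
}
demand_points = {"d1", "d2", "d3", "d4", "d5", "d6"}

chosen, uncovered = [], set(demand_points)
while uncovered:
    # Greedy rule: pick the stop that covers the most still-uncovered demand points.
    best = max(candidate_stops, key=lambda s: len(candidate_stops[s] & uncovered))
    if not candidate_stops[best] & uncovered:
        break                      # remaining demand points cannot be covered
    chosen.append(best)
    uncovered -= candidate_stops[best]

print("selected stops:", chosen)   # ['S1', 'S3'] covers all six demand points
```

In the actual DSS, the coverage sets would presumably come from walking-distance buffers around the IETT stop locations, and the choice would be weighted by the user and operator costs described above.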
86 Production Process for Diesel Fuel Components Polyoxymethylene Dimethyl Ethers from Methanol and Formaldehyde Solution
Authors: Xiangjun Li, Huaiyuan Tian, Wujie Zhang, Dianhua Liu
Abstract:
Polyoxymethylene dimethyl ethers (PODEn) as a clean diesel additive can improve the combustion efficiency and quality of diesel fuel and alleviate the problem of atmospheric pollution. Among the possible synthetic routes, PODE production from methanol and formaldehyde is regarded as the most economical and promising one. However, the synthesis of PODE from methanol produces water, which causes the loss of the catalyst's active centers and the hydrolysis of PODEn in the production process. A macroporous strong acidic cation exchange resin catalyst was prepared, which has comparative advantages over other common solid acid catalysts in terms of stability and catalytic efficiency for synthesizing PODE. Catalytic reactions were carried out at 353 K, 1 MPa and 3 mL·gcat⁻¹·h⁻¹ in a fixed bed reactor. Methanol conversion and PODE3-6 selectivity reached 49.91% and 23.43%, respectively. Catalyst lifetime evaluation showed that the resin catalyst retained its catalytic activity for 20 days without significant changes, and the catalytic activity of the completely deactivated resin catalyst could essentially be restored to its previous level by simple acid regeneration. The acid exchange capacities of the original and deactivated catalyst were 2.5191 and 0.0979 mmol·g⁻¹, respectively, while the regenerated catalyst reached 2.0430 mmol·g⁻¹, indicating that the main reason for resin catalyst deactivation is that the Brønsted acid sites of the original resin catalyst were temporarily replaced by non-hydrogen cations. A separation process consisting of extraction and distillation for the PODE3-6 product was designed for the separation of water and unreacted formaldehyde from the reaction mixture and the purification of PODE3-6, respectively. The concentration of PODE3-6 in the final product can reach up to 97%. These results indicate that the scale-up production of PODE3-6 from methanol and formaldehyde solution is feasible.
Keywords: Inactivation, polyoxymethylene dimethyl ethers, separation process, sulfonic cation exchange resin.
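For readers unfamiliar with the reported figures, conversion and selectivity can be defined on a molar basis roughly as sketched below. The mole numbers are illustrative assumptions chosen only to reproduce magnitudes similar to those reported; the paper's exact accounting basis is not given in the abstract.

```python
# Minimal sketch of how conversion and selectivity figures can be defined on a molar basis.
# All mole numbers are illustrative assumptions, not measured values from the study.
methanol_in   = 100.0   # mol methanol fed
methanol_out  = 50.1    # mol methanol leaving the reactor
mol_to_pode36 = 11.7    # mol methanol incorporated into PODE3-6 products

conversion = (methanol_in - methanol_out) / methanol_in
selectivity_pode36 = mol_to_pode36 / (methanol_in - methanol_out)

print(f"methanol conversion: {conversion:.2%}, PODE3-6 selectivity: {selectivity_pode36:.2%}")
```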
85 Relocation of Livestocks in Rural of Canakkale Province Using Remote Sensing and GIS
Authors: Melis Inalpulat, Levent Genc, Unal Kizil, Tugce Civelek
Abstract:
Livestock production is one of the most important components of the rural economy. Due to urban expansion, rural areas close to expanding cities transform into urban districts over time. However, legislation places some restrictions on livestock farming in such administrative units, since these operations tend to create environmental concerns like odor problems resulting from excessive manure production. Therefore, the existing animal operations should be moved away from the settlement areas. This paper focuses on the determination of suitable lands for livestock production in Canakkale province of Turkey using remote sensing (RS) data and GIS techniques. To achieve this goal, Formosat 2 and Landsat 8 imagery, the Aster DEM, 1:25000 scaled soil maps, village boundaries, and village livestock inventory records were used. The study was conducted using suitability analysis, which evaluates the land in terms of limitations and potentials, and the suitability range was categorized as Suitable (S) and Non-Suitable (NS). Limitations included the distances from main roads and crossroads, water resources and settlements, while potentials were appropriate values for slope, land use capability and land use land cover (LULC) status. Village-based S land distribution results were presented and compared with livestock inventories. Results showed that approximately 44,230 ha is inappropriate (NS) because of the distance limitations related to roads and other features. Moreover, according to the LULC map, 71,052 ha consists of forests, olive and other orchards, and thus may not be suitable for building such structures (NS). In comparison, it was found that there are a total of 1,228 ha of S lands within the study area. The village-based findings indicated that in some villages livestock production continues on NS areas. Finally, it was suggested that organized livestock zones may be constructed to serve more than one village after detailed analyses that also consider political decisions, the opinions of local people, etc.
Keywords: GIS, livestock, LULC, remote sensing, suitable lands.
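The suitability analysis described above amounts to combining limitation buffers and potential criteria into a single Suitable/Non-Suitable mask. A minimal raster sketch is given below; the layer names, grid size and threshold values are assumptions for illustration and do not reproduce the study's actual criteria.

```python
# Minimal sketch of a raster-based suitability analysis: boolean masks for each limitation and
# potential are combined so a cell is Suitable (S) only when every criterion is met. The layers
# are small synthetic rasters and the thresholds are assumed, not the study's values.
import numpy as np

rng = np.random.default_rng(42)
shape = (5, 5)                                    # toy raster grid

slope_pct      = rng.uniform(0, 30, shape)        # slope derived from the DEM [%]
dist_road_m    = rng.uniform(0, 2000, shape)      # distance to nearest main road [m]
dist_water_m   = rng.uniform(0, 1500, shape)      # distance to nearest water resource [m]
dist_village_m = rng.uniform(0, 3000, shape)      # distance to nearest settlement [m]
lulc_class     = rng.integers(1, 5, shape)        # 1=arable, 2=pasture, 3=forest, 4=orchard

suitable = (
    (slope_pct < 12) &                # potential: gentle slope (assumed threshold)
    (dist_road_m > 100) &             # limitation: buffer around roads (assumed)
    (dist_water_m > 200) &            # limitation: buffer around water resources (assumed)
    (dist_village_m > 500) &          # limitation: buffer around settlements (assumed)
    np.isin(lulc_class, [1, 2])       # exclude forests and orchards, as in the abstract
)

print("Suitable (S) cells:", int(suitable.sum()), "of", suitable.size)
```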
84 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI. Investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as Independent Component Analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients compared with healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. The two-sample t-test analysis yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001 for the analysis. This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: Independent Component Analysis, Resting State Network, refractory epilepsy, rsfMRI.
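A minimal sketch of the analysis pipeline, under the assumption that standard Python tooling stands in for the neuroimaging software actually used: group ICA to obtain components, then a two-sample t-test between patients and controls on a subject-level measure (such as fALFF) for one component. The data below are synthetic placeholders.

```python
# Minimal sketch: spatial ICA on concatenated rsfMRI time series followed by a two-sample
# t-test between patients and controls. All arrays are synthetic stand-ins for preprocessed data.
import numpy as np
from sklearn.decomposition import FastICA
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_timepoints, n_voxels, n_components = 120, 500, 10

# Group ICA surrogate: temporally concatenated data from all subjects (timepoints x voxels).
group_data = rng.standard_normal((n_timepoints, n_voxels))
ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
time_courses = ica.fit_transform(group_data)   # component time courses
spatial_maps = ica.components_                  # component spatial maps (components x voxels)

# Surrogate subject-level measure (e.g. fALFF or component loading) for one selected component.
patients = rng.normal(0.45, 0.10, 26)           # 26 refractory epilepsy patients
controls = rng.normal(0.55, 0.10, 19)           # 19 healthy controls

t_stat, p_value = ttest_ind(patients, controls)
print(f"two-sample t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```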
83 The Opinions of Nursing Students Regarding Humanized Care through Volunteer Activities at Boromrajonani College of Nursing, Chonburi
Authors: P. Phenpun, S. Wareewan
Abstract:
This qualitative study aimed to describe the opinions in relation to humanized care emerging from the volunteer activities of nursing students at Boromarajonani College of Nursing, Chonburi, Thailand. One hundred and twenty-seven second-year nursing students participated in this study. The volunteer activity model was composed of preparation, implementation, and evaluation through a learning log, in which students were encouraged to write their daily activities after completing practical training at the healthcare center. The preparation content included three main categories: service mindedness, analytical thinking, and client participation. The preparation process took place over three days, totaling 20 hours. The implementation process was held over 10 days, totaling 70 hours, with participants taking part in volunteer work activities at a healthcare center. A learning log was used for evaluation, and data were analyzed using content analysis. The findings were as follows. For service mindedness, two subcategories emerged from the volunteer activities: service mindedness towards patients and service mindedness within themselves. There were three categories under service mindedness towards patients, which were rapport, compassion, and empathy service behaviors, and there were four categories under service mindedness within themselves, which were self-esteem, self-value, management potential, and preparedness in providing good healthcare services. For analytical thinking, there were two components: analytical thinking about their work and analytical thinking about themselves. There were four subcategories under analytical thinking about their work, which were evidence-based thinking, real situational thinking, cause analysis thinking, and systematic thinking. There were four subcategories under analytical thinking about themselves, which were comparing themselves with their clients in a way that leads to changes in their service behaviors, open-minded thinking, modernized thinking, and verifying both verbal and non-verbal cues. Lastly, there were three categories under participation, which were mutual rapport relationships, reconsidering clients' service needs, and providing useful healthcare information.
Keywords: Humanized care service, volunteer activity, nursing student, and learning log.
82 Organization of the Purchasing Function for Innovation
Authors: Jasna Prester, Ivana Rašić Bakarić, Božidar Matijević
Abstract:
Innovations not only contribute to the competitiveness of a company but also have positive effects on revenues. On average, product innovations account for 14 percent of companies' sales. Innovation management has changed substantially during the last decade because of the growing reliance on external partners. As a consequence, a new task for purchasing arises, as firms need to understand which suppliers actually have high potential to contribute to the innovativeness of the firm and which do not. Proper organization of the purchasing function is important, since the majority of manufacturing companies deal with substantial material costs, which pass through the purchasing function. In the past the purchasing function was largely seen as a transaction-oriented, clerical function, but today purchasing is the intermediary with supply chain partners contributing to innovations, be they product or process innovations. Therefore, the purchasing function has to be organized differently to enable the firm's innovation potential. However, innovations are inherently risky. There are behavioral risks (that one partner will take advantage of the other party), technological risks in terms of the complexity of products, manufacturing processes and incoming materials, and finally market risks, which in fact determine the value of the innovation. These risks are investigated in this work. Specifically, technological risks, which deal with the complexity of products and processes, are investigated more thoroughly. Buying components of such high-end technologies necessitates careful investigation of technical features and is therefore usually conducted by a team of experts. It is therefore hypothesized that the higher the technological risk, the higher the centralization of the purchasing function as an interface with other supply chain members. The main contribution of this research lies in the fact that the analysis was performed on a large data set of 1,493 companies from 25 countries, collected in the GMRG 4 survey. Most analyses of the purchasing function are done by case studies of innovative firms; this study therefore contributes empirical evaluations that can be generalized.
Keywords: Purchasing function organization, innovation, technological risk, GMRG 4 survey.
81 Modeling Stress-Induced Regulatory Cascades with Artificial Neural Networks
Authors: Maria E. Manioudaki, Panayiota Poirazi
Abstract:
Yeast cells live in a constantly changing environment that requires the continuous adaptation of their genomic program in order to sustain their homeostasis, survive and proliferate. Due to the advancement of high-throughput technologies, there is currently a large amount of data such as gene expression, gene deletion and protein-protein interactions for S. cerevisiae under various environmental conditions. Mining these datasets requires efficient computational methods capable of integrating different types of data, identifying inter-relations between different components and inferring functional groups or 'modules' that shape intracellular processes. This study uses computational methods to delineate some of the mechanisms used by yeast cells to respond to environmental changes. The GRAM algorithm is first used to integrate gene expression data and ChIP-chip data in order to find modules of co-expressed and co-regulated genes as well as the transcription factors (TFs) that regulate these modules. Since transcription factors are themselves transcriptionally regulated, a three-layer regulatory cascade consisting of the TF-regulators, the TFs and the regulated modules is subsequently considered. This three-layer cascade is then modeled quantitatively using artificial neural networks (ANNs), where the input layer corresponds to the expression of the up-stream transcription factors (TF-regulators) and the output layer corresponds to the expression of genes within each module. This work shows that (a) the expression of at least 33 genes over time and for different stress conditions is well predicted by the expression of the top layer transcription factors, including cases in which the effect of up-stream regulators is shifted in time, and (b) identifies at least 6 novel regulatory interactions that were not previously associated with stress-induced changes in gene expression. These findings suggest that the combination of gene expression and protein-DNA interaction data with artificial neural networks can successfully model biological pathways and capture quantitative dependencies between distant regulators and downstream genes.
Keywords: Gene modules, artificial neural networks, yeast, stress.
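The three-layer cascade can be sketched as a small regression network mapping TF-regulator expression to module gene expression. The snippet below uses synthetic data and an assumed network shape; it illustrates the modeling idea, not the study's trained models.

```python
# Minimal sketch of the cascade model: expression of up-stream TF-regulators (input layer) is
# mapped to the expression of a module's genes (output layer) with a small feed-forward ANN.
# The data, module size and network shape are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_conditions, n_regulators, n_module_genes = 60, 5, 8   # conditions x TF-regulators x module genes

tf_regulator_expr = rng.standard_normal((n_conditions, n_regulators))
true_weights = rng.standard_normal((n_regulators, n_module_genes))
module_expr = np.tanh(tf_regulator_expr @ true_weights) \
              + 0.1 * rng.standard_normal((n_conditions, n_module_genes))

model = MLPRegressor(hidden_layer_sizes=(6,), activation="tanh", max_iter=5000, random_state=0)
model.fit(tf_regulator_expr[:50], module_expr[:50])            # train on early conditions
score = model.score(tf_regulator_expr[50:], module_expr[50:])  # R^2 on held-out conditions
print(f"held-out R^2 for the module's expression: {score:.2f}")
```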
80 Computer Models of the Vestibular Head Tilt Response, and Their Relationship to EVestG and Meniere's Disease
Authors: Daniel Heibert, Brian Lithgow, Kerry Hourigan
Abstract:
This paper attempts to explain response components of Electrovestibulography (EVestG) using a computer simulation of a three-canal model of the vestibular system. EVestG is a potentially new diagnostic method for Meniere's disease. EVestG is a variant of Electrocochleography (ECOG), which has been used as a standard method for diagnosing Meniere's disease; it can be used to measure the SP/AP ratio, where an SP/AP ratio greater than 0.4-0.5 is indicative of Meniere's disease. In EVestG, an applied head tilt replaces the acoustic stimulus of ECOG. The EVestG output is also an SP/AP type plot, where SP is the summing potential, and AP is the action potential amplitude. AP is thought of as being proportional to the size of a population of afferents in an excitatory neural firing state. A simulation of the fluid volume displacement in the vestibular labyrinth in response to various types of head tilts (ipsilateral, backwards and horizontal rotation) was performed, and a simple neural model based on these simulations developed. The simple neural model shows that the change in firing rate of the utricle is much larger in magnitude than the change in firing rates of all three semi-circular canals following a head tilt (except in a horizontal rotation). The data suggests that the change in utricular firing rate is a minimum 2-3 orders of magnitude larger than changes in firing rates of the canals during ipsilateral/backward tilts. Based on these results, the neural response recorded by the electrode in our EVestG recordings is expected to be dominated by the utricle in ipsilateral/backward tilts (it is important to note that the effect of the saccule and efferent signals were not taken into account in this model). If the utricle response dominates the EVestG recordings as the modeling results suggest, then EVestG has the potential to diagnose utricular hair cell damage due to a viral infection (which has been cited as one possible cause of Meniere's disease).
Keywords: Diagnostic, endolymph hydrops, Meniere's disease, modeling.
79 Simulation of Concrete Wall Subjected to Airblast by Developing an Elastoplastic Spring Model in Modelica Modelling Language
Authors: Leo Laine, Morgan Johansson
Abstract:
To meet civilization's future needs for safe living and a low environmental footprint, the engineers designing the complex systems of tomorrow will need efficient ways to model and optimize these systems for their intended purpose. For example, a civil defence shelter and its subsystem components need to withstand, e.g., airblast and ground shock from a specified design-level explosion detonating at a certain distance from the structure. In addition, the complex civil defence shelter needs functioning air filter systems to protect against toxic gases and provide clean air; clean water, heat and electricity also need to be available through shock- and vibration-safe fixtures and connections. Similar complex building systems can be found in any concentrated living or office area. In this paper, the authors use a multidomain modelling language called Modelica to model a concrete wall as a single degree of freedom (SDOF) system with elastoplastic properties and an implemented option for plastic hardening. The elastoplastic model was developed and implemented in the open source tool OpenModelica. The simulation model was tested on a case with a transient equivalent reflected pressure time history representing an airblast from 100 kg TNT detonating 15 meters from the wall. The concrete wall is approximated as a concrete strip of 1.0 m width. This load represents a realistic threat to any building in a city-like area. The OpenModelica model results were compared with an Excel implementation of an SDOF model with an elastic-plastic spring using a simple fixed-timestep central difference solver. The structural displacement results agreed very well with each other when it comes to plastic displacement magnitude, elastic oscillation displacement, and response times.
Keywords: Airblast from explosives, elastoplastic spring model, Modelica modelling language, SDOF, structural response of concrete structure.
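The reference calculation mentioned above, an SDOF system with an elastic-plastic spring advanced by a fixed-timestep central difference solver, can be sketched as follows. Damping is omitted and all parameter values are assumptions for illustration; they are not the paper's wall properties or blast load.

```python
# Minimal sketch (not the authors' Modelica/Excel code): an undamped SDOF system with an
# elastic-perfectly-plastic spring integrated by the fixed-timestep central difference method.
# All parameter values are illustrative assumptions, not taken from the paper.
import numpy as np

m   = 250.0       # equivalent mass of the 1.0 m wide concrete strip [kg] (assumed)
k   = 2.0e7       # elastic spring stiffness [N/m] (assumed)
R_y = 1.0e5       # yield resistance of the spring [N] (assumed)

def blast_load(t, p_peak=2.0e5, t_dur=0.005):
    """Idealized triangular reflected-pressure impulse [N] (assumed shape and magnitude)."""
    return p_peak * max(0.0, 1.0 - t / t_dur)

dt, t_end = 1.0e-5, 0.05
n = int(t_end / dt)
u = np.zeros(n)                 # displacement history
u_p = 0.0                       # accumulated plastic displacement

# Central difference start-up: the wall is initially at rest, so u_1 = dt^2 * a_0 / 2.
u[1] = 0.5 * dt**2 * (blast_load(0.0) - 0.0) / m

for i in range(1, n - 1):
    # Elastic trial force and plastic return mapping for the elastic-perfectly-plastic spring.
    R_trial = k * (u[i] - u_p)
    if abs(R_trial) > R_y:
        R = np.sign(R_trial) * R_y
        u_p = u[i] - R / k      # update the plastic set
    else:
        R = R_trial
    # Explicit central difference update of the equation of motion m*a + R(u) = F(t).
    u[i + 1] = 2.0 * u[i] - u[i - 1] + dt**2 * (blast_load(i * dt) - R) / m

print(f"peak displacement: {u.max()*1000:.1f} mm, residual plastic set: {u_p*1000:.1f} mm")
```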
78 An Anthropometric Index Capable of Differentiating Morbid Obesity from Obesity and Metabolic Syndrome in Children
Authors: Mustafa M. Donma
Abstract:
Circumference measurements may give meaningful information about the varying stages of obesity. Some formulas may be derived from a number of body circumference measurements to estimate body fat. Waist (WC), hip (HC) and neck (NC) circumferences are currently the most frequently used measurements. The aim of this study was to develop a formula derived from these three anthropometric measurements for the differential diagnosis of morbid obesity with and without metabolic syndrome (MetS), MOMetS+ and MOMetS-, respectively. 187 children were recruited from the pediatrics outpatient clinic of Tekirdag Namik Kemal University, Faculty of Medicine. Signed informed consent forms were obtained from the participants. The study was carried out according to the Helsinki Declaration. The study protocol was approved by the institutional non-interventional ethics committee of Tekirdag Namik Kemal University Medical Faculty. The study population was divided into four groups: normal body mass index (N-BMI) (n = 35), obese (OB) (n = 44), morbid obese (MO) (n = 75) and MetS (n = 33). Age- and gender-adjusted BMI percentile values were used for the classification of groups. The children in the MetS group were selected based upon the MetS components described in the MetS criteria. Anthropometric measurements, laboratory analysis and statistical evaluation confined to the study population were performed. BMI values were calculated. A circumference index, the advanced Donma circumference index (ADCI), was defined as WC*HC/NC. The statistical significance level was chosen as p < 0.05. BMI values were 17.7 ± 2.8, 24.5 ± 3.3, 28.8 ± 5.7 and 31.4 ± 8.0 kg/m² for the N-BMI, OB, MO and MetS groups (p = 0.001), respectively. An increasing trend from N-BMI to MetS was observed; however, the increase in the MetS group compared to the MO group was not significant. For the new index, significant differences were obtained between N-BMI and the OB, MO and MetS groups (p = 0.001). A significant difference between the MO and MetS groups was detected (p = 0.043). A significant correlation was found between BMI and ADCI. In conclusion, in spite of the strong correlation between BMI and ADCI values obtained when all groups were considered, ADCI, but not BMI, was the index capable of differentiating cases with morbid obesity from cases with morbid obesity and MetS.
Keywords: Anthropometry, body mass index, childhood obesity, body circumference, metabolic syndrome.
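Written out, the index defined in the abstract and the BMI it is compared against are:

```latex
% The circumference-based index defined in the abstract, with BMI for comparison.
% WC, HC and NC are waist, hip and neck circumferences (all in the same length unit).
\[
  \mathrm{ADCI} \;=\; \frac{WC \times HC}{NC},
  \qquad
  \mathrm{BMI} \;=\; \frac{\text{weight (kg)}}{\text{height (m)}^{2}}
\]
```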
77 Lean Production to Increase Reproducibility and Work Safety in the Laser Beam Melting Process Chain
Authors: C. Bay, A. Mahr, H. Groneberg, F. Döpper
Abstract:
Additive Manufacturing processes are becoming increasingly established in industry for the economic production of complex prototypes and functional components. Laser beam melting (LBM), the most frequently used Additive Manufacturing technology for metal parts, has been gaining in industrial importance for several years. The LBM process chain – from material storage to machine set-up and component post-processing – requires many manual operations. These steps often depend on the manufactured component and are therefore not standardized; they are frequently performed not in a standardized manner but according to the experience of the machine operator, e.g., levelling of the build plate and adjusting the first powder layer in the LBM machine. This lack of standardization limits the reproducibility of the component quality. When processing metal powders with inhalable and alveolar particle fractions, the machine operator is at high risk due to the high reactivity and the toxic (e.g., carcinogenic) effect of the various metal powders. Faulty execution of an operation or unintentional omission of safety-relevant steps can impair the health of the machine operator. In this paper, all the steps of the LBM process chain are first analysed in terms of their influence on the two aforementioned challenges: reproducibility and work safety. Standardization to avoid errors increases the reproducibility of component quality as well as the adherence to and correct execution of safety-relevant operations. The corresponding lean method 5S is therefore applied in order to develop approaches, in the form of recommended actions, that standardize the work processes. These approaches are then evaluated in terms of ease of implementation and their potential for improving reproducibility and work safety. The analysis and evaluation showed that sorting tools and spare parts as well as standardizing the workflow are likely to increase reproducibility. Organizing the operational steps and production environment decreases the hazards of material handling and consequently improves work safety.
Keywords: Additive manufacturing, lean production, reproducibility, work safety.
76 Material Concepts and Processing Methods for Electrical Insulation
Authors: R. Sekula
Abstract:
Epoxy composites are broadly used as electrical insulation for high voltage applications, since only such materials can fulfill the particular mechanical, thermal, and dielectric requirements. However, the properties of the final product are strongly dependent on a proper manufacturing process with minimized material failures, such as excessive shrinkage, voids and cracks. Therefore, the application of proper materials (epoxy, hardener, and filler) and process parameters (mold temperature, filling time, filling velocity, initial temperature of internal parts, gelation time), as well as design and geometric parameters, is essential for the final quality of the produced components. In this paper, an approach for three-dimensional modeling of all molding stages, namely filling, curing and post-curing, is presented. The reactive molding simulation tool is based on a commercial CFD package and includes dedicated models describing viscosity and reaction kinetics that have been successfully implemented to simulate the reactive nature of the system with its exothermic effect. A dedicated simulation procedure for stress and shrinkage calculations, as well as simulation results, is also presented in the paper. The second part of the paper is dedicated to recent developments in formulations of functional composites for electrical insulation applications, focusing on thermally conductive materials. Concepts based on filler modifications for epoxy electrical composites are presented, including the properties obtained. Finally, having in mind tough environmental regulations, in addition to current process and design aspects, an approach for product re-design is presented, focusing on the replacement of the epoxy material with a thermoplastic one. Such a “design-for-recycling” method is one of the new directions associated with the development of new material and processing concepts for electrical products and brings a lot of additional research challenges. To illustrate the presented methodology, one of the successful products is presented.
Keywords: Curing, epoxy insulation, numerical simulations, recycling.
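The abstract refers to dedicated reaction-kinetics models for the curing stage. A common choice for epoxy systems is the Kamal autocatalytic form; the sketch below integrates it at a fixed mold temperature with assumed parameters, purely to illustrate the kind of model such a simulation tool embeds.

```python
# Minimal sketch of an exothermic cure-kinetics model of the kind used in reactive molding
# simulations. The Kamal autocatalytic form and all parameter values are assumptions for
# illustration; the paper's dedicated kinetics model is not given in the abstract.
import numpy as np
from scipy.integrate import solve_ivp

R = 8.314                      # gas constant [J/(mol*K)]
A1, E1 = 1.0e6, 70e3           # pre-exponential factor [1/s] and activation energy [J/mol] (assumed)
A2, E2 = 5.0e6, 60e3           # second rate constant parameters (assumed)
m_exp, n_exp = 0.5, 1.5        # autocatalytic exponents (assumed)

def kamal(t, y, T):
    """d(alpha)/dt = (k1 + k2*alpha^m) * (1 - alpha)^n at mold temperature T [K]."""
    alpha = y[0]
    k1 = A1 * np.exp(-E1 / (R * T))
    k2 = A2 * np.exp(-E2 / (R * T))
    return [(k1 + k2 * alpha**m_exp) * (1.0 - alpha)**n_exp]

T_mold = 413.0                 # 140 degC mold temperature (assumed)
sol = solve_ivp(kamal, (0.0, 3600.0), [1e-4], args=(T_mold,),
                t_eval=np.linspace(0.0, 3600.0, 200))
gel_time = sol.t[np.searchsorted(sol.y[0], 0.6)]   # crude gel-point estimate at alpha ~ 0.6
print(f"degree of cure after 1 h: {sol.y[0][-1]:.2f}, estimated gel time: {gel_time:.0f} s")
```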
75 Fire Resistance of High Alumina Cement and Slag Based Ultra High Performance Fibre-Reinforced Cementitious Composites
Authors: A. Q. Sobia, M. S. Hamidah, I. Azmi, S. F. A. Rafeeqi
Abstract:
Fibre-reinforced polymer (FRP) strengthened reinforced concrete (RC) structures are susceptible to intense deterioration when exposed to elevated temperatures, particularly in the event of fire. FRP has a tendency to lose bond with the substrate due to the low glass transition temperature of epoxy, the key component of the FRP matrix. In the past few decades, various types of high performance cementitious composites (HPCC) were explored for the protection of RC structural members against elevated temperature. However, there is inadequate information on the influence of elevated temperature on ultra high performance fibre-reinforced cementitious composites (UHPFRCC) containing ground granulated blast furnace slag (GGBS) as a replacement for high alumina cement (HAC) in conjunction with hybrid fibres (basalt and polypropylene fibres), which could be a prospective fire-resisting material for structural components. The influence of elevated temperatures on the compressive as well as flexural strength of UHPFRCC, made of HAC-GGBS and hybrid fibres, was examined in this study. Besides the control sample (without fibres), three other samples, containing 0.5%, 1% and 1.5% of basalt fibres by total weight of mix together with 1 kg/m³ of polypropylene fibres, were prepared and tested. Another mix was also prepared with only 1 kg/m³ of polypropylene fibres. Each of the samples was retained at ambient temperature as well as exposed to 400, 700 and 1000 °C, followed by testing after 28 and 56 days of conventional curing. Investigation of the results disclosed that the use of hybrid fibres significantly helped to improve the ambient-temperature compressive and flexural strength of UHPFRCC, which was found to be 80 and 14.3 MPa, respectively. However, the optimum residual compressive strength was marked by UHPFRCC-CP (with polypropylene fibres only), equally after both curing periods (28 and 56 days), i.e. 41%. In addition, the highest residual flexural strength, after 28 and 56 days of curing, was marked by UHPFRCC-CP and UHPFRCC-CB2 (1 kg/m³ of PP fibres + 1% of basalt fibres), i.e. 39% and 48.5%, respectively.
Keywords: Fibre reinforced polymer materials, ground granulated blast furnace slag, high-alumina cement, hybrid fibres.
74 Tagged Grid Matching Based Object Detection in Wavelet Neural Network
Authors: R. Arulmurugan, P. Sengottuvelan
Abstract:
Object detection using a Wavelet Neural Network (WNN) makes a major contribution to image processing analysis. The existing cluster-based algorithm for co-saliency object detection operates on multiple images, but its co-saliency detection results are not suitable for handling multi-scale image objects in a WNN. The existing Super Resolution (SR) scheme for landmark images identifies the corresponding regions in the images and reduces the mismatching rate, but its structure-aware matching criterion does not attend to detecting multiple regions in SR images and fails to improve the object detection rate. To detect the objects in high-resolution remote sensing images, the Tagged Grid Matching (TGM) technique is proposed in this paper. The TGM technique consists of three main components: object determination, object searching and object verification in the WNN. Initially, object determination in the TGM technique specifies the position and size of objects in the current image. Specifying the position and size using a hierarchical grid easily determines the multiple objects. The second component, object searching in the TGM technique, is carried out using cross-point searching. The cross-point search locations of the objects are selected to speed up the searching process and reduce the detection time. The final component performs the object verification process in the TGM technique for identifying (i.e., detecting) the dissimilarity of objects in the current frame. The verification process matches the search-result grid points with the stored grid points to easily detect the objects using the Gabor wavelet transform. The implementation of the TGM technique offers a significant improvement in the multi-object detection rate, processing time, precision factor and detection accuracy level.
Keywords: Object Detection, Cross-point Searching, Wavelet Neural Network, Object Determination, Gabor Wavelet Transform, Tagged Grid Matching.
73 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey on the artificial stock market (ASM) problem. The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level). The variety of patterns at the macro level is a function of the AFM complexity. The financial market system is a complex system where the relationship between the micro and macro level cannot be captured analytically. Computational approaches, such as simulation, are expected to comprehend this connection. Agent-based simulation is a simulation technique commonly used to build AFMs. The paper proceeds by discussing the components of the ASM. We consider the roles of behavioral finance (BF) alongside the traditional risk-averse assumption in the construction of agents' attributes. Also, the influence of social networks on the development of agent interactions is addressed. Network topologies such as small-world, distance-based, and scale-free networks may be utilized to outline economic collaborations. In addition, the primary methods for developing agents' learning and adaptive abilities have been summarized. These incorporate approaches such as genetic algorithms, genetic programming, artificial neural networks and reinforcement learning. In addition, the most common statistical properties (the stylized facts) of stocks that are used for calibration and validation of ASMs are discussed. Besides, we have reviewed the major related previous studies and categorized the approaches utilized in these studies. Finally, research directions and potential research questions are discussed. The research directions of ASM may focus on the macro level by analyzing the market dynamics or on the micro level by investigating the wealth distributions of the agents.
Keywords: Artificial stock markets, agent based simulation, bounded rationality, behavioral finance, artificial neural network, interaction, scale-free networks.
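The micro-to-macro link discussed in the survey can be illustrated with a toy heterogeneous-agent market: fundamentalists trade against deviations from a fundamental value, chartists extrapolate recent trends, and a market maker adjusts the price to excess demand. All coefficients below are illustrative assumptions.

```python
# Minimal agent-based stock market sketch: heterogeneous bounded-rational agents
# (fundamentalists and chartists) submit demands and a market maker adjusts the price.
# All coefficients are illustrative assumptions, not calibrated values from the literature.
import numpy as np

rng = np.random.default_rng(0)
n_fund, n_chart = 50, 50
p_fund = 100.0                      # fundamental value assumed known to fundamentalists
price = [100.0, 100.5]              # seed price history
lam = 0.01                          # market-maker price impact coefficient (assumed)

for t in range(1000):
    # Fundamentalists buy when the asset is under-valued, sell when over-valued.
    d_fund = n_fund * 0.5 * (p_fund - price[-1])
    # Chartists extrapolate the most recent price trend.
    d_chart = n_chart * 1.0 * (price[-1] - price[-2])
    excess = d_fund + d_chart + rng.normal(0.0, 1.0)
    price.append(price[-1] + lam * excess)

returns = np.diff(np.log(price))
# Stylized facts such as excess kurtosis (fat tails) can then be checked for calibration.
kurtosis = np.mean((returns - returns.mean())**4) / returns.std()**4
print(f"final price: {price[-1]:.2f}, return kurtosis: {kurtosis:.2f}")
```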
72 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities
Authors: Kung-Jen Tu, Danny Vernatha
Abstract:
To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling (BIM) in establishing the 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments' facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency evaluated. Through the building energy management system, the energy manager is able to (a) have a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings on certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current space equipment setting to indicate an anomaly, such as when appliances turn on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
Keywords: Sensor, electricity sub-meters, database, energy anomaly detection.
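The occupancy-based anomaly rule described in item (c) can be sketched in a few lines: flag intervals where sub-metered power stays high although the room's sensors report it empty. Column names and thresholds below are assumptions for illustration, not the BIM-EMSS schema.

```python
# Minimal sketch of the anomaly rule: flag intervals where socket or lighting power stays high
# although the occupancy sensors report the room as empty. Column names, example values and the
# standby threshold are illustrative assumptions.
import pandas as pd

log = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01 08:00", periods=6, freq="h"),
    "room":      ["A101"] * 6,
    "occupied":  [1, 1, 0, 0, 0, 1],              # from the occupancy sensors
    "socket_w":  [420, 380, 395, 400, 60, 410],   # logged socket power [W]
    "light_w":   [180, 180, 175, 20, 15, 180],    # logged lighting power [W]
})

STANDBY_W = 100   # assumed standby threshold: higher sustained draw while vacant is suspicious

anomalies = log[(log["occupied"] == 0) &
                ((log["socket_w"] > STANDBY_W) | (log["light_w"] > STANDBY_W))]
print(anomalies[["timestamp", "room", "socket_w", "light_w"]])
```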
71 Automated Transformation of 3D Point Cloud to Building Information Model: Leveraging Algorithmic Modeling for Efficient Reconstruction
Authors: Radul Shishkov, Petar Penchev
Abstract:
The digital era has revolutionized architectural practices, with Building Information Modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research presents a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data — a collection of data points in space, typically produced by 3D scanners — into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historical preservation.
Keywords: Algorithmic modeling, Building Information Modeling, point cloud, reconstruction.
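One typical building block of such a pipeline is fitting planes to the point cloud so that walls and slabs can be isolated before parametric BIM elements are generated. The sketch below shows a plain RANSAC plane fit on synthetic points; it is an assumed, simplified stand-in for the authors' algorithms.

```python
# Minimal sketch of RANSAC plane fitting, often used to isolate a planar element such as a wall
# from raw scan points before BIM reconstruction. Synthetic data and thresholds are assumptions.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic scan: a noisy vertical wall at x = 2.0 m plus scattered clutter points.
wall = np.column_stack([2.0 + rng.normal(0, 0.005, 500),
                        rng.uniform(0, 5, 500),
                        rng.uniform(0, 3, 500)])
clutter = rng.uniform([0, 0, 0], [5, 5, 3], (300, 3))
points = np.vstack([wall, clutter])

def ransac_plane(pts, n_iter=500, tol=0.02):
    """Return (normal, d, inlier_mask) of the plane n.x + d = 0 with the most inliers."""
    best = (None, None, np.zeros(len(pts), dtype=bool))
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        normal /= norm
        d = -normal @ sample[0]
        inliers = np.abs(pts @ normal + d) < tol
        if inliers.sum() > best[2].sum():
            best = (normal, d, inliers)
    return best

normal, d, inliers = ransac_plane(points)
print(f"plane normal: {np.round(normal, 3)}, inliers: {inliers.sum()} of {len(points)} points")
```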
70 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank
Authors: Jalal Haghighat Monfared, Zahra Akbari
Abstract:
Today, business executives need useful information to make better decisions. Banks have also been using information tools so that they can direct the decision-making process towards achieving their desired goals, by rapidly extracting information from sources with the help of business intelligence. This research seeks to investigate whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these factors and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size of this study was 123, determined randomly by statistical methods. In this research, relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, using the Kolmogorov-Smirnov test, the normality of the data was investigated; in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, using structural equation modeling and Pearson's correlation coefficient, the research hypotheses were tested. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, correlation with other systems, user access, flexibility and risk management support, the flexibility of the business intelligence system was the most correlated with the dependent variable of the present research. This shows that it is necessary for Mellat Bank to pay more attention to choosing business intelligence systems with high flexibility in terms of the ability to produce custom-formatted reports. Subsequently, the quality of data in business intelligence systems showed the strongest relationship with the quality of decision making. Therefore, improving the quality of data, including the source of the data (internal or external), the type of data (quantitative or qualitative), the credibility of the data and the perceptions of those who use the business intelligence system, improves the quality of decision making in Mellat Bank.
Keywords: Business intelligence, business intelligence capability, decision making, decision quality.
69 Structural Analysis of Aircraft Wing Using Finite Element Analysis
Authors: Manish Kumar, Pradeep Rout, Aditya Kumar Jha, Pankaj Gupta
Abstract:
Wings are structural components of an aircraft that produce lift while the aircraft is in flight. The initial angle of attack of the wing is fixed. Due to the pressure difference between the top and bottom surfaces of the wing, a lift force is produced as the flow passes over it. This paper explains the fundamental concepts of the structural behaviour of a wing subjected to aerodynamic loads during flight. The study comprises the use of these concepts and analysis with the help of the finite element method. Wing assembly is the first stage of wing modeling and design, and it is determined by the governing geometric features. The basic wing assembly consists of a thin skin, two spars, and several ribs. It has two spars, the main spar and the secondary spar. Here, NACA 23015 is selected as the standard model for the aerofoil structure since it is close to the aerofoil used in large aircraft, specifically the Airbus A320. The two spars mainly carry the bending and twisting moments and the shear forces, and are made with a titanium alloy to ensure sufficient rigidity. The skin and wing spars are made of aluminium alloy to reduce the structural weight. Following that, a static structural analysis is performed, and the total deformation, equivalent elastic strain, and corresponding von Mises stress are obtained to aid the investigation of the mechanical behaviour of the wing. Moreover, a modal analysis is carried out to determine the natural frequencies as well as the mode shapes of the first three orders, which are obtained through a pre-stressed modal analysis. The findings of the modal analysis assist engineers in avoiding excitation near the natural frequencies and keeping the wing away from flutter. Based on the findings of the study, designers can prioritise the strengthening and examination of the stress concentration regions and the regions of large deformation. Overall, the simulation results demonstrate that the design is feasible and improve the understanding of the lifting surface.
Keywords: FEM, Airbus, NACA, modulus of elasticity, aircraft wing.
68 The Integration Process of Non-EU Citizens in Luxembourg: From an Empirical Approach Toward a Theoretical Model
Authors: Angela Odero, Chrysoula Karathanasi, Michèle Baumann
Abstract:
Integration of foreign communities has been a forefront issue in Luxembourg for some time now. The country's continued progress depends largely on the successful integration of immigrants. The aim of our study was to analyze the factors that intervene in the course of integration of Non-EU citizens, through the discourse of Non-EU citizens residing in Luxembourg who have signed the Welcome and Integration Contract (CAI). The two-year contract offers integration services to assist foreigners in getting settled in the country. Semi-structured focus group discussions with 50 volunteers were held in English, French, Spanish, Serbo-Croatian or Chinese. Participants were asked to talk about their integration experiences. Recorded and then transcribed, the transcriptions were analyzed with the help of NVivo 10, a qualitative analysis software. A systematic and reiterative analysis of decomposing and reconstituting was realized through (1) the identification of predetermined categories (difficulties, challenges and integration needs), (2) initial coding – the grouping together of similar ideas, and (3) axial coding – the regrouping of items from the initial coding in new ways in order to create sub-categories and identify other core dimensions. Our results show that the intervening factors include language acquisition, professional career and socio-cultural activities or events. Each of these factors comprises different components whose weight shifts from person to person and from situation to situation. Connecting these three emergent factors are two elements essential to the success of the immigrant's integration: the role of time and deliberate effort from the immigrants, the community, and the formal institutions charged with helping immigrants integrate. We propose a theoretical model where the factors described may be classified in terms of how they predispose, facilitate, and/or reinforce the process towards a successful integration. Measures currently in place propose one-size-fits-all programs, yet integrative measures that target the family unit, and those customized to target groups based on their needs, would work best.
Keywords: Integration, Integration Services, Non-EU citizens, Qualitative Analysis, Third Country Nationals.
67 International E-Learning for Assuring Ergonomic Working Conditions of Orthopaedic Surgeons: First Research Outcomes from Train4OrthoMIS
Authors: J. Bartnicka, J. A. Piedrabuena, R. Portilla, L. Moyano - Cuevas, J. B. Pagador, P. Augat, J. Tokarczyk, F. M. Sánchez Margallo
Abstract:
Orthopaedic surgeries are characterized by a high degree of complexity. This is reflected in four main groups of resources: 1) the surgical team, which consists of people with different competencies, educational backgrounds and positions; 2) information and knowledge about the medical and technical aspects of the surgery; 3) medical equipment including surgical tools and materials; 4) space infrastructure, which is important from an operating room layout point of view. All these components must be integrated and build a homogeneous organism for achieving an efficient and ergonomically correct surgical workflow. Against this background, a concept for an international project was formulated, called "Online Vocational Training course on ergonomics for orthopaedic Minimally Invasive" (Train4OrthoMIS), whose aim is to develop an e-learning tool available in 4 languages (English, Spanish, Polish and German). This article presents the first project research outcomes, focused on three aspects: 1) the ergonomic needs of surgeons who work in hospitals across different European countries, 2) the concept of the structure of the e-learning course, 3) the definition of tools and methods for knowledge assessment adjusted to users' expectations. The methodology was based on expert panels and two types of surveys: 1) on training needs, 2) on evaluation and self-assessment preferences. The major findings of the study allowed the subjects of four training modules and learning sessions to be described. According to people's opinions, the most expected test methods were defined as the single-choice test, followed by the "True or False" and "Link elements" quizzes. The first project outcomes confirmed the necessity of creating a universal training tool for orthopaedic surgeons regardless of the country in which they work. Because of the limited time that surgeons have, the e-learning course should be strictly adjusted to their expectations in order to be useful.
Keywords: International e-learning, ergonomics, orthopaedic surgery, Train4OrthoMIS.
66 Association of Phosphorus and Magnesium with Fat Indices in Children with Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Metabolic syndrome (MetS) is a disease associated with obesity. It is a complicated clinical problem possibly affecting body composition as well as macrominerals. These parameters gain further attention particularly in the pediatric population. The aim of this study is to investigate the amounts of discrete body composition fractions in groups that differ in the severity of obesity, as well as their possible associations with calcium (Ca), phosphorus (P) and magnesium (Mg). The study population was divided into four groups: 28, 29, 34 and 34 children were involved in Group 1 (healthy), Group 2 (obese), Group 3 (morbid obese) and Group 4 (MetS), respectively. The Institutional Ethical Committee approved the study protocol. Informed consent forms were obtained from the parents of the participants. The classification of the obese groups was performed based upon the recommendations of the World Health Organization. MetS components were defined. Serum Ca, P and Mg concentrations were measured. Within the scope of body composition, fat mass, fat-free mass, protein mass and mineral mass were determined by a body composition monitor using bioelectrical impedance analysis technology. Weight, height, waist circumference, hip circumference, head circumference and neck circumference values were recorded. Body mass index, diagnostic obesity notation model assessment index, fat mass index and fat-free mass index values were calculated. Data were statistically evaluated and interpreted. There was no statistically significant difference among the groups in terms of Ca and P concentrations. Magnesium concentrations differed between Group 1 and Group 4. Strong negative correlations were detected in Group 4, which comprised morbid obese children with MetS, between P as well as Mg and both the fat mass index and the diagnostic obesity notation model assessment index. This study emphasized the unique associations of the minerals P and Mg with the diagnostic obesity notation model assessment index and the fat mass index during the evaluation of morbid obese children with MetS. It was also concluded that the diagnostic obesity notation model assessment index and the fat mass index were more proper indices in comparison with the body mass index and fat-free mass index for the purpose of defining body composition in children.
Keywords: Children, fat mass, fat-free mass, macrominerals, obesity.
65 Security Model of a Unified Communications and Integrated Collaborations System in the Health Sector Environment of Developing Countries: A Case of Uganda
Authors: Excellence Favor, Bakari M. M. Mwinyiwiwa
Abstract:
Access to information holds the key to the empowerment of everybody, regardless of where they are living. This research has been carried out in respect of people living in developing countries, considering their plight and the complex geographical, demographic and socio-economic conditions surrounding the areas where they live. These conditions hinder access to information and to professionals providing services, such as medical workers, which has led to high death rates and development stagnation. Research on a Unified Communications and Integrated Collaborations (UCIC) system in the health sector of developing countries aims at creating a possible solution for bridging the digital canyon among the communities. The system is meant to deliver services in a seamless manner, to assist health workers situated anywhere to be reached easily and to access information, which will enhance service delivery. The proposed UCIC provides the most immersive telepresence experience for one-to-one or many-to-many meetings. Extending to locations anywhere in the world, the transformative platform delivers ultra-low operating costs through the use of general-purpose networks, special lenses and tracking systems. The essence of this study is to create a security model for the deployment of the UCIC system in the health sector of developing countries. The modeling approach used for building the UCIC system security carefully considers the specific requirements of the health sector organizations, such as the data centre, national, regional and district hospitals, and health centres IV, III, II and I, and then builds the single best possible secure network to meet their needs. The security model demonstrates how the components of the UCIC system will be protected physically and logically in the health sector environment. The UCIC system, once adopted and implemented correctly, will enhance the speed and quality of services offered by health workers. The capabilities of UCIC will help health workers shorten decision cycles, accelerate service delivery and save lives by speeding access to information and by making it possible for all health workers and patients to collaborate ubiquitously.
Keywords: Developing Countries, Health Sector Environment, Security, Unified Communications and Integrated Collaborations.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 152964 Impact of Fischer-Tropsch Wax on Ethylene Vinyl Acetate/Waste Crumb Rubber Modified Bitumen: An Energy-Sustainability Nexus
Authors: Keith D. Nare, Mohau J. Phiri, James Carson, Chris D. Woolard, Shanganyane P. Hlangothi
Abstract:
In an energy-intensive world, minimizing energy consumption is paramount to cost saving and to reducing the carbon footprint. Improving mixture procedures by utilizing the warm-mix additive Fischer-Tropsch (FT) wax in ethylene vinyl acetate (EVA) modified bitumen highlights a greener and more sustainable approach to modified bitumen. In this study, the impact of FT wax on optimized EVA/waste crumb rubber modified bitumen is assayed with a maximum loading of 2.5%. The rationale of the FT wax loading is to maintain the original maximum loading of EVA in the optimized mixture. The phase change abilities of FT wax enable EVA co-crystallization with the support of the elastomeric backbone of the crumb rubber. A loading of less than 1% FT wax proved effective in the EVA/crumb rubber modified bitumen energy-sustainability nexus. A response surface methodology approach to the mixture design is implemented across the different loadings of FT wax and EVA, with a consistent amount of crumb rubber and bitumen. Rheological parameters (complex shear modulus, phase angle and rutting parameter) were used as performance indicators of the different optimized mixtures. The low-temperature chemistry of the optimized mixtures is analyzed using elementary beam theory and the elastic-viscoelastic correspondence principle. Master curves and Black space diagrams are developed and used to predict age-induced cracking of the different long-term aged mixtures. Modified binder rheology reveals that the strain response is not linear and that there is substantial rearrangement of polymer chains as stress is increased; this depends on the age state of the mixture and on the FT wax and EVA loadings. Individual effects dominate over synergistic effects in the co-interaction of EVA and FT wax. All-inclusive FT wax and EVA formulations were best optimized in mixture 4, with mixture 7 reflecting an increase in ease of workability. Findings show that the interaction chemistry of bitumen, crumb rubber, EVA and FT wax is of first and second order in all cases, involving individual contributions and co-interactions among the components of the mixture.
Keywords: Bitumen, crumb rubber, ethylene vinyl acetate, FT wax.
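Since the abstract uses the complex shear modulus, phase angle and rutting parameter as performance indicators, the short sketch below shows how the commonly used Superpave rutting parameter G*/sin(delta) would be computed from the first two quantities. This is the generic definition, not the authors' code, and the example values are hypothetical.

```python
import math

def rutting_parameter(g_star_pa: float, phase_angle_deg: float) -> float:
    """Return G*/sin(delta) in Pa, a common high-temperature rutting indicator.

    A higher complex modulus G* (stiffness) and a lower phase angle delta
    (more elastic response) both raise the rutting parameter, indicating
    better resistance to permanent deformation.
    """
    return g_star_pa / math.sin(math.radians(phase_angle_deg))

# Hypothetical binder: G* = 2.2 kPa with a 75 degree phase angle
print(f"G*/sin(delta) = {rutting_parameter(2200.0, 75.0):.0f} Pa")
```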
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 94663 The Evaluation of Complete Blood Cell Count-Based Inflammatory Markers in Pediatric Obesity and Metabolic Syndrome
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Obesity is defined as a severe chronic disease characterized by a low-grade inflammatory state. Therefore, inflammatory markers have gained utmost importance during the evaluation of obesity and metabolic syndrome (MetS), a disease characterized by central obesity, elevated blood pressure, increased fasting blood glucose and elevated triglycerides or reduced high density lipoprotein cholesterol (HDL-C) values. Several inflammatory markers based upon the complete blood cell count (CBC) are available. In this study, it was questioned which inflammatory marker was best suited to evaluating the differences between various obesity groups. 514 pediatric individuals were recruited: 132 children with MetS, 155 morbid obese (MO), 90 obese (OB), 38 overweight (OW) and 99 children with normal BMI (N-BMI) were included in the scope of this study. Obesity groups were constituted using age- and sex-dependent body mass index (BMI) percentiles tabulated by the World Health Organization. MetS components were determined in order to identify children with MetS. CBC parameters were determined using an automated hematology analyzer, and HDL-C analysis was performed. Using the CBC parameters and HDL-C values, ratio markers of inflammation, which cover the neutrophil-to-lymphocyte ratio (NLR), derived neutrophil-to-lymphocyte ratio (dNLR), platelet-to-lymphocyte ratio (PLR), lymphocyte-to-monocyte ratio (LMR) and monocyte-to-HDL-C ratio (MHR), were calculated. Statistical analyses were performed, and p < 0.05 was considered statistically significant. There was no statistically significant difference among the groups in terms of platelet count, neutrophil count, lymphocyte count, monocyte count and NLR. PLR differed significantly between the OW group and both the N-BMI and MetS groups. The monocyte-to-HDL-C ratio exhibited statistically significant differences between the MetS group and the N-BMI, OB and MO groups. HDL-C values differed between the MetS group and the N-BMI, OW, OB and MO groups. MHR was the ratio that exhibited the best performance among the CBC-based inflammatory markers. On the other hand, when MHR was compared to HDL-C alone, HDL-C was found to give much more valuable information; therefore, this parameter still keeps its value from the diagnostic point of view. Our results suggest that MHR can be an inflammatory marker during the evaluation of pediatric MetS, but the predictive value of this parameter was not superior to that of HDL-C during the evaluation of obesity.
Keywords: Children, complete blood cell count, high density lipoprotein cholesterol, metabolic syndrome, obesity.
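The ratio markers listed in the abstract follow standard CBC-based definitions; a minimal sketch of those definitions is given below, assuming absolute cell counts and HDL-C in mg/dL. The function name and the sample values are illustrative, not data from the study.

```python
def cbc_inflammatory_ratios(neutrophils, lymphocytes, monocytes, platelets,
                            wbc_total, hdl_c_mg_dl):
    """Return the CBC-based inflammatory ratios discussed in the abstract.

    Cell counts are absolute counts (e.g. 10^3 cells/uL); HDL-C is in mg/dL.
    dNLR uses the derived denominator (total white cells minus neutrophils).
    """
    return {
        "NLR": neutrophils / lymphocytes,
        "dNLR": neutrophils / (wbc_total - neutrophils),
        "PLR": platelets / lymphocytes,
        "LMR": lymphocytes / monocytes,
        "MHR": monocytes / hdl_c_mg_dl,
    }

# Hypothetical sample: WBC 7.5, neutrophils 4.0, lymphocytes 2.8,
# monocytes 0.5, platelets 280 (all 10^3/uL), HDL-C 45 mg/dL
print(cbc_inflammatory_ratios(4.0, 2.8, 0.5, 280.0, 7.5, 45.0))
```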
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 85062 Technique for Online Condition Monitoring of Surge Arrestors
Authors: Anil S. Khopkar, Kartik S. Pandya
Abstract:
The lightning overvoltage phenomenon in power systems cannot be avoided; however, it can be controlled to a certain extent. To prevent system failure, power system equipment must be protected against overvoltage. Metal Oxide Surge Arresters (MOSA) are connected in the system to provide protection against overvoltages. Under normal working conditions, a MOSA functions as an insulator and offers a conductive path only during overvoltage events. A MOSA consists of zinc oxide elements (ZnO blocks), which have non-linear V-I characteristics. The ZnO blocks are connected in series and fitted in a ceramic or polymer housing. Over time, these components degrade due to continuous operation. The degradation of the zinc oxide elements increases the leakage current flowing through the surge arrester. This increased leakage current results in elevated temperatures within the surge arrester, which further decrease the resistance of the zinc oxide elements. Consequently, the leakage current increases again, leading to still higher temperatures within the MOSA. This cycle creates thermal runaway conditions. Once a surge arrester reaches the thermal runaway condition, it cannot return to normal working conditions, and this condition is a primary cause of the premature failure of surge arresters. Given that the MOSA constitutes a core protective device for electrical power systems against transients, it contributes significantly to the reliable operation of power system networks. Therefore, periodic condition monitoring of surge arresters is essential. Both online and offline condition monitoring techniques are available. Offline techniques are less popular because they require the removal of the surge arrester from the system, which requires a system shutdown; therefore, online condition monitoring techniques are more commonly used. This paper presents an evaluation technique for the surge arrester condition based on leakage current analysis. The maximum amplitudes of the total leakage current (IT), the fundamental resistive leakage current (IR) and the third harmonic resistive leakage current (I3rd) are analyzed as indicators for surge arrester condition monitoring.
Keywords: Metal Oxide Surge Arrester, MOSA, overvoltage, total leakage current, resistive leakage current, third harmonic resistive leakage current, capacitive leakage current.
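One common way to obtain the harmonic content of a measured leakage current is a discrete Fourier transform of the sampled waveform; the sketch below illustrates that generic approach for the fundamental and third harmonic amplitudes. It is not the authors' measurement or compensation scheme, and the sampling parameters and signal values are assumptions for illustration.

```python
import numpy as np

def harmonic_amplitudes(i_leak: np.ndarray, fs: float, f0: float = 50.0):
    """Return peak amplitudes of the fundamental and third harmonic.

    i_leak: leakage current samples (A); fs: sampling rate (Hz);
    f0: system fundamental frequency (Hz).
    """
    n = len(i_leak)
    spectrum = np.fft.rfft(i_leak) * 2.0 / n        # single-sided amplitudes
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx_1 = np.argmin(np.abs(freqs - f0))           # bin nearest f0
    idx_3 = np.argmin(np.abs(freqs - 3 * f0))       # bin nearest 3*f0
    return abs(spectrum[idx_1]), abs(spectrum[idx_3])

# Synthetic example: 1 mA fundamental plus a 50 uA third harmonic, 10 cycles
fs, f0 = 10_000.0, 50.0
t = np.arange(0, 0.2, 1.0 / fs)
i = 1e-3 * np.sin(2 * np.pi * f0 * t) + 50e-6 * np.sin(2 * np.pi * 3 * f0 * t)
print(harmonic_amplitudes(i, fs, f0))  # approximately (0.001, 5e-05)
```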
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 8561 The Estimation Method of Stress Distribution for Beam Structures Using the Terrestrial Laser Scanning
Authors: Sang Wook Park, Jun Su Park, Byung Kwan Oh, Yousok Kim, Hyo Seon Park
Abstract:
This study suggests an estimation method for the stress distribution of beam structures based on TLS (Terrestrial Laser Scanning). The main components of the method are the creation of lattices from the raw TLS data so as to satisfy suitable conditions, and the application of CSSI (Cubic Smoothing Spline Interpolation) for estimating the stress distribution. Estimation of the stress distribution of a structural member or the whole structure is one of the important factors for the safety evaluation of the structure. Existing sensors, which include the ESG (electric strain gauge) and the LVDT (linear variable differential transformer), can be categorized as contact-type sensors that must be installed on the structural members; they also have various limitations, such as the need for separate space where the network cables are installed and the difficulty of access for sensor installation in real buildings. To overcome these problems inherent in contact-type sensors, TLS, a LiDAR (light detection and ranging) system that can measure the displacement of a target at long range without the influence of the surrounding environment and can also capture the whole shape of the structure, has been applied to the field of structural health monitoring. An important characteristic of TLS measurement is the formation of point clouds, which contain many points with local coordinates. Point clouds do not follow a linear distribution but have a dispersed shape; thus, interpolation is vital for analyzing them. Through the formation of averaged lattices and the application of CSSI to the raw data, a method that can estimate the displacement of a simple beam was developed. The developed method can also be extended to calculate the strain and, finally, to estimate the stress distribution of a structural member. To verify the validity of the method, a loading test on a simple beam was conducted and measured with TLS. Through a comparison of the estimated stress and the reference stress, the validity of the method is confirmed.
Keywords: Structural health monitoring, terrestrial laser scanning, estimation of stress distribution, coordinate transformation, cubic smoothing spline interpolation.
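A hedged sketch of the post-processing chain described above is given below, assuming SciPy's UnivariateSpline as the cubic smoothing spline and elementary Euler-Bernoulli beam theory for the stress recovery; the beam properties, smoothing factor and the synthetic deflection data standing in for lattice-averaged TLS measurements are illustrative only.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def bending_stress_from_deflection(x_m, w_m, E_pa, c_m, smoothing=0.0):
    """Estimate extreme-fibre bending stress along a beam from deflections.

    x_m: lattice positions along the beam (m); w_m: averaged deflections (m);
    E_pa: Young's modulus (Pa); c_m: distance from the neutral axis to the
    extreme fibre (m). smoothing > 0 smooths noisy data; 0 interpolates.
    Euler-Bernoulli theory gives stress = E * c * w''(x).
    """
    spline = UnivariateSpline(x_m, w_m, k=3, s=smoothing)  # cubic smoothing spline
    curvature = spline.derivative(2)(x_m)                  # second derivative w''(x)
    return E_pa * c_m * curvature                          # bending stress (Pa)

# Synthetic check: simply supported beam under a uniform load q, using the
# analytic deflection curve in place of measured TLS data.
L, E, I, c, q = 6.0, 210e9, 8.0e-5, 0.15, 10e3
x = np.linspace(0.0, L, 41)
w = q * x * (L**3 - 2 * L * x**2 + x**3) / (24 * E * I)
sigma = bending_stress_from_deflection(x, w, E, c)
print(f"midspan stress magnitude ~ {abs(sigma[20]) / 1e6:.1f} MPa")  # ~ q*L^2*c/(8*I) = 84.4 MPa
```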
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 274360 Steady State Rolling and Dynamic Response of a Tire at Low Frequency
Authors: Md Monir Hossain, Anne Staples, Kuya Takami, Tomonari Furukawa
Abstract:
Tire noise has a significant impact on ride quality and vehicle interior comfort, even at low frequency. Reduction of tire noise is especially important due to strict state and federal environmental regulations. The primary sources of tire noise are the low-frequency structure-borne noise and the noise that originates from the release of trapped air between the tire tread and the road surface during each revolution of the tire. The frequency response of the tire differs between low and high frequencies: at low frequencies, tension and bending moment become dominant, while the internal structure and local deformation become dominant at higher frequencies. Here, we analyze the tire response in terms of deformation and rolling velocity at low revolution frequency. An Abaqus finite element model is used to calculate the static and dynamic response of a rolling tire under different rolling conditions. The natural frequencies and mode shapes of the deformed tire are calculated with the FEA package, and a subspace-based steady-state dynamic analysis calculates the dynamic response of the tire subjected to harmonic excitation. The analysis was conducted on the dynamic response at the road node (the contact point of the tire and road surface) and the side nodes of a static and a rolling tire when the tire was excited with a 200 N vertical load over a frequency range of 20 to 200 Hz. The results show that frequency has little effect on tire deformation up to 80 Hz, but between 80 and 200 Hz the radial and lateral displacement components of the road and side nodes exhibit significant oscillation. For the static analysis, the fluctuation was sharp and frequent and decreased with frequency; in contrast, the fluctuation was periodic in nature for the dynamic response of the rolling tire. In addition to the dynamic analysis, a steady-state rolling analysis was performed on the tire traveling at ground velocity with a constant angular motion. The purpose of the computation was to demonstrate the effect of the rotating motion on deformation and rolling velocity with respect to a fixed Newtonian reference point. The analysis showed a significant variation in deformation and rolling velocity due to centrifugal and Coriolis acceleration with respect to a fixed Newtonian point on the ground.
Keywords: Natural frequency, rotational motion, steady state rolling, subspace-based steady state dynamic analysis.
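To make the role of the fixed Newtonian reference frame concrete, the sketch below evaluates the velocity and centripetal acceleration of a tread point on a freely rolling tire. It is plain rigid-body kinematics, not the Abaqus steady-state transport analysis itself, and the tire radius and ground speed are hypothetical values.

```python
import numpy as np

def tread_point_kinematics(radius_m, ground_speed_mps, theta_rad):
    """Return velocity and centripetal acceleration of a tread point.

    theta is measured from the contact point; free rolling is assumed, so
    the spin rate is omega = v / R and the contact point itself is
    momentarily at rest relative to the ground.
    """
    omega = ground_speed_mps / radius_m                      # free rolling
    # position of the point relative to the wheel centre (x forward, y up)
    r = radius_m * np.array([np.sin(theta_rad), -np.cos(theta_rad)])
    v_axle = np.array([ground_speed_mps, 0.0])
    omega_vec = np.array([0.0, 0.0, -omega])                 # forward rolling spins about -z
    v_point = v_axle + np.cross(omega_vec, np.append(r, 0.0))[:2]
    a_centripetal = -omega**2 * r                            # directed toward the wheel centre
    return v_point, a_centripetal

# Contact point (theta = 0) is momentarily at rest; the top of the tire
# (theta = pi) moves at roughly twice the ground speed.
print(tread_point_kinematics(0.35, 25.0, 0.0))
print(tread_point_kinematics(0.35, 25.0, np.pi))
```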
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 132159 Miniature Fast Steering Mirrors for Space Optical Communication on NanoSats and CubeSats
Authors: Sylvain Chardon, Timotéo Payre, Hugo Grardel, Yann Quentel, Mathieu Thomachot, Gérald Aigouy, Frank Claeyssen
Abstract:
With the increasing digitalization of society, access to data has become vital and strategic for individuals and nations. In this context, the number of satellite constellation projects is growing dramatically worldwide and constitutes a next-generation challenge for the New Space industry. So far, existing satellite constellations have been using radio frequencies (RF) for satellite-to-ground communications, inter-satellite communications and feeder-link communication. However, RF has several limitations, such as limited bandwidth and a low protection level. To address these limitations, space optical communication is the emerging trend, offering both very high-speed and secure, encrypted communication. Fast Steering Mirrors (FSM) are key components used in optical communication as well as in space imagery, and they serve a wide range of functions such as Point Ahead Mechanisms (PAM), raster scanning, Beam Steering Mirrors (BSM), Fine Pointing Mechanisms (FPM) and Line of Sight (LOS) stabilization. The main challenge of space FSM development for optical communication is to propose both a technology and a supply chain suited to the high-quantity New Space approach, which requires secure connectivity for high-speed internet, Earth observation and monitoring, and mobility applications. CTEC proposes a mini-FSM technology offering a stroke of +/-6 mrad and a resonant frequency of 1700 Hz, with a mass of 50 g. This FSM mechanism is a good candidate for giant constellations and for all applications on board NanoSats and CubeSats, featuring a very high level of miniaturization and optimized for the cost efficiency demanded by New Space quantities. The use of piezo actuators offers a high resonance frequency for optimal control, almost zero power consumption in step-and-stay pointing, and very high reliability figures (> 0.995) demonstrated over years of recurrent manufacturing for optronics applications at CTEC.
Keywords: Fast steering mirror, feeder link, line of sight stabilization, optical communication, pointing ahead mechanism, raster scan.
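As a back-of-the-envelope check on the quoted +/-6 mrad stroke, the sketch below converts a mechanical mirror tilt into an optical beam deflection and a lateral offset at an assumed target distance, using the standard plane-mirror rule that reflection doubles the tilt. The target distance is an illustrative assumption, not a figure from the paper.

```python
import math

def beam_offset(mirror_tilt_rad: float, target_distance_m: float) -> float:
    """Lateral beam displacement at a given distance for a mirror tilt."""
    optical_deflection = 2.0 * mirror_tilt_rad           # reflection doubles the tilt
    return target_distance_m * math.tan(optical_deflection)

stroke = 6e-3                                             # +/-6 mrad mechanical stroke
print(f"optical deflection: +/-{2 * stroke * 1e3:.0f} mrad")
print(f"offset at 1 m: +/-{beam_offset(stroke, 1.0) * 1e3:.1f} mm")
```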
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 178