Search results for: cloud service models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10216

6616 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response Surface Methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates ‘finding the flats’ on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, accounting for unknown sources of variation. Dual response plus POE therefore provides a more useful model of overall response variation. In our case, we implemented this technique to predict the 28-day compressive strength of concrete, since waiting 28 days is quite time consuming while the quality control process must be ensured. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. Data used for this study were obtained from experimental schemes carried out at the University of Benghazi, Civil Engineering Department. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, namely cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required compressive strength of concrete from early-age results.
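The RSM step described above amounts to fitting a second-order polynomial surface to experimental data by least squares. A minimal sketch, using invented variables (water/cement ratio and cement content) and synthetic data rather than the paper's 114 actual data sets:

```python
import numpy as np

# Hypothetical illustration: fit a second-order response surface
# strength = b0 + b1*wc + b2*cem + b3*wc^2 + b4*cem^2 + b5*wc*cem
# to synthetic (water/cement ratio, cement content) data via least squares.
rng = np.random.default_rng(0)
wc = rng.uniform(0.4, 0.7, 30)        # water/cement ratio
cem = rng.uniform(300, 450, 30)       # cement content, kg/m^3
true = 120 - 90 * wc + 0.05 * cem     # assumed underlying trend
strength = true + rng.normal(0, 1.0, 30)

X = np.column_stack([np.ones_like(wc), wc, cem, wc**2, cem**2, wc * cem])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)

def predict(w, c):
    return coef @ np.array([1.0, w, c, w**2, c**2, w * c])

print(round(predict(0.5, 380), 1))
```

Once fitted, such a surface can be interrogated at any mix proportion, which is what makes RSM useful for finding optimal configurations.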

Keywords: mix proportioning, response surface methodology, compressive strength, optimal design

Procedia PDF Downloads 248
6615 Statistical Central Point Design to Evaluate the Combination of pH and Cinnamon Essential Oil on the Antioxidant Activity Using the ABTS Technique

Authors: H. Minor-Pérez, A. M. Mota-Silva, S. Ortiz-Barrios

Abstract:

Substances of vegetable origin with antioxidant capacity have a high potential for application in the conservation of some foods; they can, for example, prevent or reduce the oxidation of lipids. However, a food is a complex system containing a wide variety of components which can reduce or eliminate this antioxidant capacity. The antioxidant activity can be determined with the ABTS technique. The radical ABTS+ is generated from 2,2´-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS). This radical is a bluish-green compound, stable and with an absorption spectrum in the UV-visible range. The addition of antioxidants causes discoloration, a value that can be reported as a percentage of inhibition of the radical cation ABTS+. The objective of this study was to evaluate the effect of the combination of pH and the essential oil of cinnamon (EOC) on the inhibition of the radical ABTS+, using a statistical central point design (Design Expert) to obtain mathematical models that describe this phenomenon. Seventeen treatments were evaluated, combining pH 5, 6 and 7 (citrate-phosphate buffer) with concentrations of essential oil of cinnamon (C) of 0 µg/mL, 100 µg/mL and 200 µg/mL. The samples were analyzed using the ABTS technique. The reagent was dissolved in 80% methanol to standardize the absorbance to 0.7 ± 0.1 at 754 nm. Samples were then mixed with the standardized ABTS reagent, and the absorbance of each treatment was read at 754 nm after 1 min and 7 min. A standard curve with vitamin C was used, and the values were reported as inhibition (%) of the radical ABTS+. The statistical analysis shows that the experimental results fitted a quadratic model at both 1 min and 7 min. This model describes the independent influence of the factors investigated, pH and cinnamon essential oil concentration (µg/mL), the effect of the interaction pH*C, and the quadratic terms pH2 and C2.
The model obtained at 1 min was Y = 10.33684 - 3.98118*pH + 1.17031*C + 0.62745*pH2 - 3.26675*10-3*C2 - 0.013112*pH*C, where Y is the response variable; its coefficient of determination was 0.9949. The equation obtained at 7 min was Y = -10.89710 + 1.52341*pH + 1.32892*C + 0.47953*pH2 - 3.56605*10- *C2 - 0.034687*pH*C, with a coefficient of determination of 0.9970. This means that less than 1% of the total variation is not explained by the developed models. At 100 µg/mL of EOC, inhibition percentages of 80%, 84% and 97% were obtained for pH values of 5, 6 and 7 respectively, while at 200 µg/mL the inhibition (%) was very similar for all treatments, close to 97% at each pH value. In conclusion, pH did not have a significant effect on the antioxidant capacity, while the concentration of EOC was decisive for it. The authors acknowledge the funding provided by CONACYT for project 131998.
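The fitted 1-min model can be evaluated directly from the coefficients reported in the abstract (the 7-min equation is not used here because one of its coefficients is incompletely transcribed in the source):

```python
# Evaluate the reported 1-min quadratic model for ABTS+ inhibition (%)
# at a given pH and cinnamon essential oil concentration C (ug/mL).
# Coefficients are taken verbatim from the abstract.
def inhibition_1min(pH, C):
    return (10.33684 - 3.98118 * pH + 1.17031 * C
            + 0.62745 * pH**2 - 3.26675e-3 * C**2
            - 0.013112 * pH * C)

print(round(inhibition_1min(7, 100), 1))  # ~88.4
```

The value at pH 7 and 100 µg/mL is in the same range as the roughly 97% inhibition reported experimentally for that treatment.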

Keywords: antioxidant activity, ABTS technique, essential oil of cinnamon, mathematical models

Procedia PDF Downloads 389
6614 Destructive and Nondestructive Characterization of Advanced High Strength Steels DP1000/1200

Authors: Carla M. Machado, André A. Silva, Armando Bastos, Telmo G. Santos, J. Pamies Teixeira

Abstract:

Advanced high-strength steels (AHSS) are increasingly being used in automotive components. The use of AHSS sheets plays an important role in reducing weight, as well as in increasing the impact resistance of vehicle components. However, the large-scale use of these sheets is hindered by limitations in the forming process. Such limitations are due to the elastically driven change of shape of a metal sheet during unloading after forming, known as the springback effect. As the magnitude of the springback tends to increase with the strength of the material, it is among the most worrisome problems in the use of AHSS. The prediction of strain hardening, especially under non-proportional loading conditions, is very limited due to the lack of constitutive models and, mainly, of experimental tests. It is clear from the literature that little experimental work has evaluated deformation behavior under real conditions, which implies a very limited and scarce development of mathematical models for these conditions. The Bauschinger effect is also fundamental to the difference between the kinematic and isotropic hardening models used to predict springback in sheet metal forming. It is of major importance to deepen the phenomenological knowledge of the mechanical and microstructural behavior of the materials, in order to reproduce with high fidelity their deformation behavior by means of computational simulation. For this, a multi-phenomenological analysis and characterization are necessary to understand the various aspects involved in plastic deformation, namely the stress-strain relations and also the variations of electrical conductivity and magnetic permeability associated with the metallurgical changes due to plastic deformation.
Aiming at a complete mechanical-microstructural characterization, uniaxial tensile tests involving successive cycles of loading and unloading were performed, as well as biaxial tests such as the Erichsen test. Nondestructive evaluation was also carried out, comprising eddy current tests to verify microstructural changes due to plastic deformation and ultrasonic tests to evaluate local variations of thickness. The material parameters for the stable yield function and the monotonic strain hardening were obtained using uniaxial tension tests in different material directions and balanced biaxial tests. Both the decrease of the modulus of elasticity and the Bauschinger effect were determined through the load-unload tensile tests. By means of the eddy current tests, it was possible to verify changes in the magnetic permeability of the material across the different plastically deformed areas. The ultrasonic tests were an important aid in quantifying the local plastic extension. With these data, it is possible to parameterize the different kinematic hardening models to better approximate the simulation results to the experimental ones, which is fundamental for the springback prediction of stamped parts.

Keywords: advanced high strength steel, Bauschinger effect, sheet metal forming, springback

Procedia PDF Downloads 211
6613 Using Geospatial Analysis to Reconstruct the Thunderstorm Climatology for the Washington DC Metropolitan Region

Authors: Mace Bentley, Zhuojun Duan, Tobias Gerken, Dudley Bonsal, Henry Way, Endre Szakal, Mia Pham, Hunter Donaldson, Chelsea Lang, Hayden Abbott, Leah Wilcynzski

Abstract:

Air pollution has the potential to modify the lifespan and intensity of thunderstorms and the properties of lightning. Using data mining and geovisualization, we investigate how background climate and weather conditions shape variability in urban air pollution and how this, in turn, shapes thunderstorms as measured by the intensity, distribution, and frequency of cloud-to-ground lightning. A spatiotemporal analysis was conducted in order to identify thunderstorms using high-resolution lightning detection network data. Over seven million lightning flashes were used to identify more than 196,000 thunderstorms that occurred between 2006 and 2020 in the Washington, DC Metropolitan Region. Each lightning flash in the dataset was grouped into thunderstorm events by means of a temporal and spatial clustering algorithm. Once the thunderstorm event database was constructed, hourly wind direction, wind speed, and atmospheric thermodynamic data were added to the initiation and dissipation times and locations for the 196,000 identified thunderstorms. Hourly aerosol and air quality data for the thunderstorm initiation times and locations were also incorporated into the dataset. Developing thunderstorm climatologies using a lightning tracking algorithm and lightning detection network data was found to be useful for visualizing the spatial and temporal distribution of urban-augmented thunderstorms in the region.
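The grouping of flashes into storm events can be sketched as single-linkage spatiotemporal clustering: two flashes belong to the same storm if they fall within a distance and time threshold of each other. The thresholds and union-find implementation below are illustrative assumptions, not the study's actual algorithm or parameters:

```python
import math

def cluster_flashes(flashes, max_km=15.0, max_min=20.0):
    """Group lightning flashes (lat, lon, minutes) into thunderstorm events:
    two flashes join the same storm if they are within max_km and max_min
    of each other (single-linkage, via union-find)."""
    parent = list(range(len(flashes)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(flashes)):
        for j in range(i + 1, len(flashes)):
            (la1, lo1, t1), (la2, lo2, t2) = flashes[i], flashes[j]
            # Equirectangular distance approximation, ~111 km per degree
            km = 111.0 * math.hypot(la1 - la2,
                                    (lo1 - lo2) * math.cos(math.radians(la1)))
            if km <= max_km and abs(t1 - t2) <= max_min:
                parent[find(i)] = find(j)

    storms = {}
    for i in range(len(flashes)):
        storms.setdefault(find(i), []).append(i)
    return list(storms.values())

# Two flashes near DC minutes apart, one far away hours later -> 2 storms
flashes = [(38.90, -77.03, 0.0), (38.95, -77.00, 5.0), (39.50, -76.00, 300.0)]
print(len(cluster_flashes(flashes)))  # 2
```

For seven million flashes a production implementation would need spatial indexing rather than this quadratic pairwise scan.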

Keywords: lightning, urbanization, thunderstorms, climatology

Procedia PDF Downloads 56
6612 Review of Theories and Applications of Genetic Programming in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better management of flood capacity in reservoirs and consequently help to control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper makes an extensive review of literature relevant to the theories and applications of evolutionary algorithms, most especially genetic programming. Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues, such as benchmarks, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers and the general GP community enough research direction and valuable guidance, and to keep all stakeholders abreast of the issues which need attention during the next decade for the advancement of GP.
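A minimal sketch of the working principles the review discusses: genetic programming as symbolic regression over random expression trees, with subtree mutation, elitist selection, and a depth cap as a crude bloat control. The target function, operator set and all parameters are illustrative choices, not drawn from the reviewed literature:

```python
import random, operator

random.seed(42)
OPS = {'add': operator.add, 'sub': operator.sub, 'mul': operator.mul}
XS = [i / 4 for i in range(-8, 9)]
TARGET = [x * x + x for x in XS]          # toy target: x^2 + x

def rand_tree(depth=3):
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.7 else random.uniform(-2, 2)
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(t, x):
    if t == 'x':
        return x
    if isinstance(t, float):
        return t
    return OPS[t[0]](evaluate(t[1], x), evaluate(t[2], x))

def error(t):
    return sum((evaluate(t, x) - y) ** 2 for x, y in zip(XS, TARGET))

def mutate(t, p=0.2):
    if random.random() < p or not isinstance(t, tuple):
        return rand_tree(2)               # replace subtree (bounded to limit bloat)
    return (t[0], mutate(t[1], p), mutate(t[2], p))

pop = [rand_tree() for _ in range(60)]
start = min(error(t) for t in pop)
for _ in range(40):
    pop.sort(key=error)                   # elitism: keep the 10 best
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(50)]
best = min(pop, key=error)
print(error(best) <= start)               # elitism guarantees no regression
```

Real GP systems add crossover, tournament selection and explicit bloat control; this sketch only shows the tree representation and evolutionary loop.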

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 424
6611 Performance Comparison of Different Regression Methods for a Polymerization Process with Adaptive Sampling

Authors: Florin Leon, Silvia Curteanu

Abstract:

Developing complete mechanistic models for polymerization reactors is not easy: complex reactions occur simultaneously, a large number of kinetic parameters are involved, and sometimes the chemical and physical phenomena of mixtures involving polymers are poorly understood. To overcome these difficulties, empirical models based on sampled data can be used instead, namely the regression methods typical of the machine learning field. They have the ability to learn the trends of a process without any knowledge of its particular physical and chemical laws. Therefore, they are useful for modeling complex processes, such as the free radical polymerization of methyl methacrylate achieved in a batch bulk process. The goal is to generate accurate predictions of monomer conversion, number average molecular weight and weight average molecular weight. This process is associated with non-linear gel and glass effects. For this purpose, an adaptive sampling technique is presented, which can select more samples around the regions where the values have a higher variation. Several machine learning methods are used for the modeling and their performance is compared: support vector machines, k-nearest neighbor and random forest, as well as an original algorithm, large margin nearest neighbor regression. The suggested method provides very good results compared to the other well-known regression algorithms.
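The adaptive sampling idea, concentrating samples where the response varies most, can be sketched with a toy curve and a simple k-NN regressor. Everything here is an illustrative assumption (the sharp rise mimics a gel-effect-like transition; the spread criterion is one plausible variation measure, not the paper's actual algorithm):

```python
import numpy as np

def process(x):
    # Toy "conversion" curve with a sharp gel-effect-like rise after x = 0.7
    return np.where(x < 0.7, x, x + 5 * (x - 0.7))

x_train = np.linspace(0, 1, 11)
y_train = process(x_train)

def knn_predict(x, k=3):
    idx = np.argsort(np.abs(x_train - x))[:k]
    return y_train[idx].mean()

def spread(x, k=3):
    # Local variation: range of the k nearest responses around candidate x
    idx = np.argsort(np.abs(x_train - x))[:k]
    return y_train[idx].max() - y_train[idx].min()

# Adaptive step: rank candidate locations by local variation and sample there
candidates = np.linspace(0.05, 0.95, 19)
picks = sorted(candidates, key=spread, reverse=True)[:3]
print([round(float(p), 2) for p in picks])  # picks concentrate in the steep region
```

The selected candidates all fall in the steep part of the curve, which is exactly where extra samples improve a data-driven model the most.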

Keywords: batch bulk methyl methacrylate polymerization, adaptive sampling, machine learning, large margin nearest neighbor regression

Procedia PDF Downloads 288
6610 Surface Characterization of Zincblende and Wurtzite Semiconductors Using Nonlinear Optics

Authors: Hendradi Hardhienata, Tony Sumaryada, Sri Setyaningsih

Abstract:

Current progress in the field of nonlinear optics has enabled precise surface characterization of semiconductor materials. Nonlinear optical techniques are favorable due to their nondestructive measurement and their ability to work in non-vacuum and ambient conditions. The advance of bond hyperpolarizability models opens a wide range of nanoscale surface investigations, including the possibility of detecting molecular orientation at the surface of silicon and zincblende semiconductors, investigating electric-field-induced second harmonic fields at the semiconductor interface, detecting surface impurities and, very recently, studying surface defects such as twin boundaries in wurtzite semiconductors. In this work, we show, using nonlinear optical techniques such as nonlinear bond models, how arbitrary polarization of the incoming electric field in Rotational Anisotropy Spectroscopy experiments can provide more information regarding the origin of the nonlinear sources in zincblende and wurtzite semiconductor structures. In addition, using hyperpolarizability considerations, we describe how the nonlinear susceptibility tensor describing SHG can be well modelled using only a few parameters because of the symmetry of the bonds. We also show how the third harmonic intensity changes considerably when the incoming field polarization angle is changed from s-polarized to p-polarized. Finally, we propose a method to investigate surface reconstruction and defects in wurtzite and zincblende structures at the nanoscale.
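As a concrete illustration of a rotational anisotropy pattern, the standard p-in/p-out SHG response of a three-fold symmetric (C3v) zincblende (111) surface follows I(φ) ∝ |a + b·cos 3φ|², where a and b are the isotropic and anisotropic coefficients. The coefficient values below are arbitrary placeholders, not fitted bond hyperpolarizabilities:

```python
import math

def shg_intensity(phi, a=1.0, b=0.5):
    """p-in/p-out SHG RAS pattern for a C3v (111) surface: I ∝ |a + b cos 3φ|²."""
    return abs(a + b * math.cos(3 * phi)) ** 2

# Three-fold symmetry: rotating the azimuth by 120 degrees leaves I unchanged
phi = 0.4
print(abs(shg_intensity(phi) - shg_intensity(phi + 2 * math.pi / 3)) < 1e-12)
```

Fitting a and b to a measured azimuthal scan is what lets the bond model recover nonlinear source parameters from only a few degrees of freedom.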

Keywords: surface characterization, bond model, rotational anisotropy spectroscopy, effective hyperpolarizability

Procedia PDF Downloads 143
6609 Primary Health Care Vital Signs Profile in Malaysia: Challenges and Opportunities

Authors: Rachel Koshy, Nazrila Hairizan Bt. Nasir, Samsiah Bt. Awang, Kamaliah Bt. Mohamad Noh

Abstract:

Malaysia collaborated as a ‘trailblazer’ country with the Primary Health Care Performance Initiative (PHCPI) to populate the Primary Health Care (PHC) Vital Signs Profile (VSP) for the country. The PHC VSP provides an innovative snapshot of the primary health care system's performance. Four domains were assessed: system financing, system capacity, system performance, and system equity; the assessment was completed in 2019. There were two phases, using a mixed-method study design. The first phase involved a quantitative study utilising existing secondary data from national and international sources. Where data were unavailable for an indicator, comparable alternative indicators were used. The second phase was a mixed quantitative-qualitative approach to measure functional capacity based on governance and leadership, population health needs, inputs, population health management, and facility organisation and management. PHC spending constituted 35% of overall health spending in Malaysia, with a per capita PHC spending of $152. The capacity domain was strong in the three subdomains of governance and leadership, information systems, and funds management. The two subdomains of drugs and supplies and facility organisation and management had low scores, but the lowest score was in empanelment of the population under population health management. The PHC system performed with an access index of 98%, a quality index of 84%, and service coverage of 62%. In the equity domain, there was little fluctuation in the coverage of reproductive, maternal, newborn, and child health services by mother’s level of education, or in under-five child mortality between urban and rural areas. The public sector was stronger in the capacity domain than the private sector, due to different financing, organisational structures, and service delivery mechanisms. The VSP has identified areas for improvement in the effort to provide high-quality PHC for the population.
The gaps in PHC can be addressed through the system approach and the positioning of public and private primary health care delivery systems.

Keywords: primary health care, health system, system domains, vital signs profile

Procedia PDF Downloads 112
6608 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments for planned projects to predict the expected noise emissions of these projects. Different noise assessment methods can be used. In recent years, we had the opportunity to collaborate in noise assessment procedures where noise assessments by different laboratories were performed simultaneously. We identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on methods for predictive noise modelling of planned projects. We analyzed the input data, methods and results of predictive noise modelling for two planned industrial projects, each done independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the validations of the acoustic models were performed by noise measurements of surrounding existing noise sources, but over varying durations. The acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty range of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements of the existing sources greatly increased the comparability of the modelling results. In both cases of collaborative noise modelling, for the existing motorway and railway, the modelling results of the different laboratories were comparable.
Differences in noise modelling results between laboratories were below 5 dBA, which was the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) predictive noise calculation using formulae from the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict noise emissions of planned projects, since, due to the complexity of the procedure, the formulae are not applied strictly; 2) noise measurements are important tools to minimize noise assessment errors for planned projects and, in cases of predictive noise modelling, should be performed at least for validation of the acoustic model; 3) national guidelines should be issued on the appropriate data, methods, noise source digitization, validation of acoustic models, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
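To give a sense of the ISO 9613-2 calculation scheme the abstract refers to: for a point source, the geometric divergence term is Adiv = 20·log10(d) + 11 dB (reference distance 1 m) and atmospheric absorption is Aatm = α·d/1000 with α in dB/km. The sketch below includes only these two terms; the ground, barrier and meteorological corrections that make the full procedure complex, and error-prone when not applied strictly, are deliberately omitted:

```python
import math

def receiver_level(Lw, d, alpha=2.0):
    """Band sound pressure level at distance d (m) from a point source with
    sound power level Lw (dB), using only the geometric divergence and
    atmospheric absorption terms of the ISO 9613-2 scheme. Ground, barrier
    and meteorological corrections are omitted in this sketch."""
    A_div = 20 * math.log10(d) + 11      # spherical spreading, ref 1 m
    A_atm = alpha * d / 1000.0           # alpha in dB/km
    return Lw - A_div - A_atm

# Doubling the distance costs ~6 dB of divergence plus extra absorption
print(round(receiver_level(100, 100) - receiver_level(100, 200), 2))  # 6.22
```

Even in this stripped-down form, small differences in source digitization (position, height, power level) propagate directly into dB differences at the receiver, which is the comparability problem the study documents.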

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 219
6607 Blended Learning through Google Classroom

Authors: Lee Bih Ni

Abstract:

This paper argues that good learning involves all academic groups in the school. Blended learning extends learning beyond the classroom. Google Classroom is a free learning service app for schools, non-profit organizations and anyone with a personal Google account. Its facilities, accessed through computers and mobile phones, are very useful for school teachers and students. Blended learning classrooms, using both traditional and technology-based methods of teaching, have become the norm for many educators. Using Google Classroom gives students access to online learning: even when the teacher is not in the classroom, the teacher can provide learning, which offers a form of teacher supervision while students are outside the school.

Keywords: blended learning, learning app, google classroom, schools

Procedia PDF Downloads 126
6606 The Role of Ecotourism Development in the Financing of Conservation Initiatives in Cameroon’s Protected Areas: Lessons from the Campo Ma’an National Park

Authors: Nyong Princely Awazi, Gadinga Walter Forje, Barnabas Neba Nfornkah, Ndzifon Jude Kimengsi

Abstract:

Ecotourism is documented as a sustainable means of bridging conservation goals and livelihood sustenance around protected areas, due to its ability not only to provide alternative livelihoods but also to provide resources that can help finance conservation initiatives. In Cameroon, ecotourism activities around national parks are aimed at generating revenue through the conservation service while providing sustainable livelihood options to the local population. There is an information lacuna regarding the contribution of ecotourism finances to conservation efforts in the country. This study aimed to establish the contribution of ecotourism finances to conservation initiatives in and around the Campo Ma’an National Park (CMNP). Data were collected by administering 120 structured questionnaires to ecotourism actors and conducting 15 key/expert interviews with tourism and conservation actors in the Campo Ma’an landscape. Chi-square tests, Spearman’s rank correlation and regressions were used for data analysis. The study revealed that the main sources of ecotourism financing for the park service are entrance fees, camera and vehicle fees paid by tourists, and ecotourism project financing through NGOs. Calculations from the park's tourism register showed that the park was able to raise as much as 1,576,000 FCFA (US$ 3,152) annually. It was further established that ecotourism revenue has not greatly supported conservation, with 54% of respondents perceiving that ecotourism does not contribute to biodiversity conservation. Chi-square test results highlighted poor ecotourism governance, a low level of ecotourism development, corruption among park management staff, and the obsolete nature of the current finance law on the management of protected area revenue as key factors hindering ecotourism financing of conservation.
For ecotourism financing to contribute to biodiversity conservation in the CMNP and in Cameroon’s protected areas generally, the government needs to revise the finance law on the management of revenue generated from protected areas, improve park governance to fight corruption and enhance transparency, and invest in the development and marketing of the Campo Ma’an National Park as a tourism destination.

Keywords: Cameroon, Campo Ma’an National Park, conservation, ecotourism, ecotourism financing

Procedia PDF Downloads 97
6605 Depth-Averaged Modelling of Erosion and Sediment Transport in Free-Surface Flows

Authors: Thomas Rowan, Mohammed Seaid

Abstract:

A fast finite volume solver for multi-layered shallow water flows with mass exchange and an erodible bed is developed. This enables the user to solve a number of complex sediment-based problems including (but not limited to) dam-break over an erodible bed, recirculation currents and bed evolution, as well as levee and dyke failure. This research develops methodologies crucial to the understanding of multi-sediment fluvial mechanics and waterway design. In this model, mass exchange between the layers is allowed and, in contrast to previous models, sediment and fluid are able to transfer between layers. In the current study, we use a two-step finite volume method to avoid the solution of the Riemann problem. Entrainment and deposition rates are calculated for the first time in a model of this nature. In the first step, the governing equations are rewritten in a non-conservative form and the intermediate solutions are calculated using the method of characteristics. In the second stage, the numerical fluxes are reconstructed in conservative form and are used to calculate a solution that satisfies the conservation property. This method is found to be considerably faster than comparable finite volume methods, and it also exhibits good shock capturing. For most entrainment and deposition equations, a bed-level concentration factor is used; this leads to inaccuracies in both near-bed concentration and total scour. To account for diffusion, as no vertical velocities are calculated, a capacity-limited diffusion coefficient is used. An additional advantage of this multilayer approach is the variation (absent from single-layer models) in bottom-layer fluid velocity: this dramatically reduces erosion, which is often overestimated in simulations of this nature using single-layer flows. The model is used to simulate a standard dam break.
In the dam-break simulation, as expected, the number of fluid layers utilised creates variation in the resultant bed profile, with more layers giving a higher deviation in fluid velocity. These results showed a marked variation in erosion profiles from standard models. Overall, the model provides new insight into the problems presented, at minimal computational cost.
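For orientation, a minimal single-layer dam-break finite volume solver (Lax-Friedrichs fluxes, fixed bed, no sediment) is sketched below. This is far simpler than the paper's two-step multilayer scheme with mass exchange and an erodible bed; it only illustrates the conservative update and the mass-conservation property such schemes must satisfy:

```python
import numpy as np

g, N, L = 9.81, 200, 10.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
h = np.where(x < L / 2, 2.0, 1.0)       # dam-break initial depths (m)
hu = np.zeros(N)                         # initially at rest

def flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

t = 0.0
mass0 = h.sum() * dx
while t < 0.2:                           # stop before waves reach the walls
    c = np.abs(hu / h) + np.sqrt(g * h)  # characteristic speeds
    dt = 0.4 * dx / c.max()              # CFL-limited time step
    U = np.array([h, hu])
    Ue = np.pad(U, ((0, 0), (1, 1)), mode='edge')   # zero-gradient ghost cells
    F = flux(Ue[0], Ue[1])
    # Lax-Friedrichs interface fluxes
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * dx / dt * (Ue[:, 1:] - Ue[:, :-1])
    U = U - dt / dx * (Fi[:, 1:] - Fi[:, :-1])       # conservative update
    h, hu = U
    t += dt

print(abs(h.sum() * dx - mass0) < 1e-9)  # total water mass is conserved
```

The conservative flux-difference form is exactly what guarantees the telescoping mass balance; the paper's second stage reconstructs its fluxes in this form for the same reason.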

Keywords: erosion, finite volume method, sediment transport, shallow water equations

Procedia PDF Downloads 205
6604 MB-Slam: A Slam Framework for Construction Monitoring

Authors: Mojtaba Noghabaei, Khashayar Asadi, Kevin Han

Abstract:

Simultaneous Localization and Mapping (SLAM) technology has recently attracted the attention of construction companies for real-time performance monitoring. To use SLAM effectively for construction performance monitoring, SLAM results should be registered to a Building Information Model (BIM). Registering SLAM and BIM can provide essential insights for construction managers to identify construction deficiencies in real-time and ultimately reduce rework. Also, registering SLAM to BIM in real-time can boost the accuracy of SLAM, since SLAM can then use features from both images and 3D models. However, registering SLAM with BIM in real-time is a challenge. In this study, a novel SLAM platform named Model-Based SLAM (MB-SLAM) is proposed, which not only provides automated registration of SLAM and BIM but also improves the localization accuracy of the SLAM system in real-time. This framework improves the accuracy of SLAM by aligning perspective features such as depth, vanishing points, and vanishing lines from the BIM to the SLAM system. It extracts depth features from a monocular camera's image and improves the localization accuracy of the SLAM system through a real-time iterative process. Initially, SLAM is used to calculate a rough camera pose for each keyframe. In the next step, each keyframe of the SLAM video sequence is registered to the BIM in real-time by aligning the keyframe's perspective with the equivalent BIM view. The alignment method is based on perspective detection, which estimates vanishing lines and points by detecting straight edges in images. This process generates the associated BIM views from the keyframes' views. The calculated poses are later improved through a real-time gradient-descent-based iterative method. Two case studies were presented to validate MB-SLAM. The validation process demonstrated promising results: SLAM was accurately registered to BIM, and the SLAM's localization accuracy was significantly improved.
Moreover, MB-SLAM achieved real-time performance in both indoor and outdoor environments. The proposed method can fully automate past studies and generate as-built models that are aligned with BIM. The main contribution of this study is a SLAM framework, for both research and commercial usage, which aims to monitor construction progress and performance in a unified framework. Through this platform, users can improve the accuracy of SLAM by providing a rough 3D model of the environment. MB-SLAM further advances SLAM toward practical use.
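The vanishing point estimation at the heart of the perspective-alignment step can be sketched as a least-squares problem: given line segments (from detected straight edges), find the point minimizing the sum of squared perpendicular distances to all lines. This is a generic formulation, not MB-SLAM's actual implementation:

```python
import numpy as np

def vanishing_point(points, dirs):
    """Least-squares intersection of 2D lines p_i + t*d_i: the point minimizing
    the sum of squared perpendicular distances to all lines."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        M = np.eye(2) - np.outer(d, d)   # projector onto the line's normal
        A += M
        b += M @ p
    return np.linalg.solve(A, b)

# Parallel building edges imaged in perspective: lines through a common point
vp_true = np.array([4.0, 1.5])
points = [np.array([0.0, 0.0]), np.array([1.0, 3.0]), np.array([-2.0, 2.0])]
dirs = [vp_true - p for p in points]
print(np.allclose(vanishing_point(points, dirs), vp_true))  # True
```

Matching image-space vanishing points against those predicted by the BIM view gives the residual that a gradient-descent pose refinement can then drive toward zero.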

Keywords: perspective alignment, progress monitoring, SLAM, stereo matching

Procedia PDF Downloads 199
6603 Investigating the Mechanical Effect of Different Root Analogue Models on Soil Strength

Authors: Asmaa Al Shafiee, Erdin Ibraim

Abstract:

Stabilizing slopes using vegetation is considered a cost-effective and eco-friendly alternative to conventional methods. The main aim of this study is to investigate the mechanical effect of analogue root systems on the shear strength of different soil types. Three objectives were defined to achieve this aim: firstly, to explore the effect of root architectural design on the shear strength parameters; secondly, to study the effect of root area ratio (RAR) on the shear strength of two different soil types; and finally, to investigate how different kinds of soil can affect the behavior of the roots during shear failure. A 3D printing tool was used to develop different analogue tap root models with different architectural designs. Direct shear tests were performed on Leighton Buzzard (LB) fraction B sand, which represents a coarse sand, and Hostun sand, which represents a medium-coarse sand. All tests were done at the same relative density for both sands. The results of the direct shear tests indicated that using plant roots increases both the friction angle and the cohesion of the soil. Additionally, different root designs affected the shear strength of the soil differently. Furthermore, a directly proportional relationship was found between the root area ratio, for the same root design, and the shear strength parameters of the soil. Finally, the root area ratio effect should be combined with branches penetrating the shear plane to obtain the highest results.
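The friction angle and cohesion reported from such direct shear tests come from fitting the Mohr-Coulomb failure envelope τ = c + σ·tan φ to the measured (normal stress, peak shear stress) pairs. The values below are illustrative, not the study's measurements:

```python
import numpy as np

# Fit Mohr-Coulomb parameters tau = c + sigma*tan(phi) to direct shear results.
sigma = np.array([50.0, 100.0, 200.0])   # normal stress, kPa (illustrative)
tau = np.array([44.0, 73.0, 131.0])      # peak shear stress, kPa (illustrative)

tan_phi, c = np.polyfit(sigma, tau, 1)   # slope, intercept of the envelope
phi = np.degrees(np.arctan(tan_phi))
print(round(c, 1), round(phi, 1))        # apparent cohesion (kPa), friction angle
```

For rooted specimens, the roots typically show up as an increase in the fitted intercept c (apparent root cohesion) and, as this study reports, also in φ.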

Keywords: leighton buzzard sand, root area ratio, rooted soil, shear strength, slope stabilization

Procedia PDF Downloads 132
6602 Use of Front-Face Fluorescence Spectroscopy and Multiway Analysis for the Prediction of Olive Oil Quality Features

Authors: Omar Dib, Rita Yaacoub, Luc Eveleigh, Nathalie Locquet, Hussein Dib, Ali Bassal, Christophe B. Y. Cordella

Abstract:

The potential of front-face fluorescence coupled with chemometric techniques, namely parallel factor analysis (PARAFAC) and multiple linear regression (MLR), as a rapid analysis tool to characterize Lebanese virgin olive oils was investigated. Fluorescence fingerprints were acquired directly on 102 Lebanese virgin olive oil samples in the range of 280-540 nm in excitation and 280-700 nm in emission. A PARAFAC model with seven components was considered optimal, with a model fit of 99.64% and a core consistency value of 78.65. The model revealed seven main fluorescence profiles in olive oil, mainly associated with tocopherols, polyphenols, chlorophyllic compounds, and oxidation/hydrolysis products. Twenty-three MLR models based on PARAFAC scores were generated, the majority of which showed a good correlation coefficient (R > 0.7 for 12 predicted variables) and thus satisfactory prediction performance. Acid value, peroxide value, and Delta K were the best-predicted variables, with R values of 0.89, 0.84, and 0.81, respectively. Among fatty acids, linoleic and oleic acids were also well predicted, with R values of 0.8 and 0.76, respectively. The factors contributing to the models were related to common fluorophores found in olive oil, mainly chlorophyll, polyphenols, and oxidation products. This study demonstrates the interest of front-face fluorescence as a promising tool for quality control of Lebanese virgin olive oils.
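The second modelling step described above, regressing a quality feature on PARAFAC sample scores, can be sketched as follows. The scores and the target variable here are simulated stand-ins; in the study, the scores come from the seven-component PARAFAC decomposition of the excitation-emission tensor (for which libraries such as TensorLy provide implementations):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in PARAFAC sample scores: 102 samples x 7 components, as if taken
# from decomposing the fluorescence tensor. Values are simulated, not the
# study's fitted scores.
scores = rng.normal(size=(102, 7))
true_coef = np.array([0.8, -0.4, 0.0, 0.3, 0.0, 0.0, 0.2])
acidity = scores @ true_coef + 0.2 * rng.normal(size=102)  # mock quality feature

# MLR on the scores: ordinary least squares with an intercept term.
A = np.hstack([scores, np.ones((102, 1))])
coef, *_ = np.linalg.lstsq(A, acidity, rcond=None)
predicted = A @ coef
r = np.corrcoef(predicted, acidity)[0, 1]  # correlation coefficient R
```

An R value computed this way is what the abstract reports per predicted quality feature (e.g., R = 0.89 for acid value).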

Keywords: front-face fluorescence, Lebanese virgin olive oils, multiple linear regression, PARAFAC analysis

Procedia PDF Downloads 436
6601 Administrative Supervision of Local Authorities’ Activities in Selected European Countries

Authors: Alina Murtishcheva

Abstract:

The development of an effective system of administrative supervision is a prerequisite for the functioning of local self-government on the basis of the rule of law. Administrative supervision of local self-government is of particular importance in the EU countries due to the influence of integration processes. The central authorities act on the international level; however, subnational authorities also have to implement European legislation in order to strengthen integration. Therefore, the central authority, being the connecting link between supranational and subnational authorities, should bear responsibility, including financial responsibility, for possible mistakes of subnational authorities. Consequently, the state should have sufficient mechanisms of control over local and regional authorities in order to correct their mistakes. At the same time, the control mechanisms do not deny the autonomy of local self-government. The paper analyses models of administrative supervision of local self-government in Ukraine, Poland, Lithuania, Belgium, Great Britain, Italy, and France. The research methods used in this paper are theoretical methods of analysis of scientific literature, constitutions, legal acts, Congress of Local and Regional Authorities of the Council of Europe reports, and constitutional court decisions, as well as comparative and logical analysis. The legislative basis of administrative supervision was scrutinized, and the models of administrative supervision were classified, including a priori control and ex-post control or their combination. The advantages and disadvantages of these models of administrative supervision are analysed. Compliance with Article 8 of the European Charter of Local Self-Government is of great importance for countries achieving common goals and sharing common values. However, countries under study have problems and, in some cases, demonstrate non-compliance with provisions of Article 8. 
Instances of non-conformity, such as the endorsement of mayors by the Flemish Government in Belgium, supervision with a view to expediency in Great Britain, and the tendency to overuse supervisory powers in Poland, are analysed. On the basis of this research, the tendencies of administrative supervision of local authorities' activities in the selected European countries are described. Several recommendations are formulated for Ukraine as a country that has been granted EU candidate status. Having emphasised its willingness to become a member of the European community, Ukraine should not only follow the best European practices but also avoid the mistakes of countries with long-term experience in developing the institution of local self-government. This project has received funding from the Research Council of Lithuania (LMTLT), agreement № P-PD-22-194.

Keywords: administrative supervision, decentralisation, legality, local authorities, local self-government

Procedia PDF Downloads 46
6600 Simulation of Multistage Extraction Process of Co-Ni Separation Using Ionic Liquids

Authors: Hongyan Chen, Megan Jobson, Andrew J. Masters, Maria Gonzalez-Miquel, Simon Halstead, Mayri Diaz de Rienzo

Abstract:

Ionic liquids offer excellent advantages over conventional solvents for industrial extraction of metals from aqueous solutions, where such extraction processes bring opportunities for recovery, reuse, and recycling of valuable resources and more sustainable production pathways. Recent research on the use of ionic liquids for extraction confirms their high selectivity and low volatility, but there is relatively little focus on how their properties can best be exploited in practice. This work addresses gaps in research on process modelling and simulation to support the development, design, and optimisation of these processes, focusing on the separation of the highly similar transition metals cobalt and nickel. The study exploits published experimental results, as well as new experimental results, relating to the separation of Co and Ni using trihexyl(tetradecyl)phosphonium chloride. This extraction agent is attractive because it is cheaper, more stable, and less toxic than fluorinated hydrophobic ionic liquids. The process modelling work concerns the selection and/or development of suitable models for the physical properties, the distribution coefficients, the mass transfer phenomena, the extractor unit, and the multi-stage extraction flowsheet. The distribution coefficient model for cobalt and HCl represents an anion exchange mechanism, supported by the literature and by COSMO-RS calculations. The parameters of the distribution coefficient models are estimated by fitting the models to published experimental extraction equilibrium results. The mass transfer model applies Newman's hard sphere model. Diffusion coefficients in the aqueous phase are obtained from the literature, while diffusion coefficients in the ionic liquid phase are fitted to dynamic experimental results. The mass transfer area is calculated from the Sauter mean diameter of the liquid droplets of the dispersed phase, estimated from the Weber number inside the extractor.
New experiments measure the interfacial tension between the aqueous and ionic liquid phases. Empirical models for predicting the density and viscosity of the solutions under different metal loadings are also fitted to new experimental data. The extractor is modelled as a continuous stirred tank reactor with mass transfer between the two phases and perfect phase separation of the outlet flows. A multistage separation flowsheet simulation is set up to replicate a published experiment and compare model predictions with the experimental results. This simulation model is implemented in the gPROMS software for dynamic process simulation. The results of single-stage and multi-stage flowsheet simulations are shown to be in good agreement with the published experimental results. The estimated diffusion coefficient of cobalt in the ionic liquid phase is in reasonable agreement with published diffusion coefficients of various metals in this ionic liquid. A sensitivity study with this simulation model demonstrates the usefulness of the models for process design. The simulation approach has the potential to be extended to other metals, acids, and solvents for the development, design, and optimisation of extraction processes applying ionic liquids for metal separations, although a lack of experimental data currently limits the accuracy of the models within the framework. Future work will focus on process development more generally and on the extractive separation of rare earths using ionic liquids.
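For a linear equilibrium, the counter-current stage mass balances behind a multistage extraction flowsheet reduce to a small linear system, and the closed-form Kremser equation provides a consistency check. The sketch below uses placeholder flows and a constant distribution coefficient, not the fitted Co/Ni model from the study:

```python
import numpy as np

# Illustrative counter-current extraction cascade with a constant
# distribution coefficient D (linear equilibrium y = D*x); the values
# below are placeholders, not the Co/Ni data from the study.
F, S, D, N = 1.0, 0.5, 4.0, 3   # feed flow, solvent flow, distribution coeff., stages
E = D * S / F                   # extraction factor (here E = 2)

def raffinate_fraction_stagewise(F, S, D, N, x_feed=1.0):
    """Solve the N coupled stage balances
       F*x[n-1] + S*y[n+1] = F*x[n] + S*y[n],  with y = D*x,
    feed entering stage 1 and fresh solvent (y = 0) entering stage N."""
    A = np.zeros((N, N))
    b = np.zeros(N)
    for n in range(N):
        A[n, n] = F + S * D
        if n > 0:
            A[n, n - 1] = -F
        if n < N - 1:
            A[n, n + 1] = -S * D
    b[0] = F * x_feed
    x = np.linalg.solve(A, b)
    return x[-1] / x_feed       # fraction of solute left in the raffinate

def raffinate_fraction_kremser(E, N):
    """Closed-form Kremser result for the same cascade (valid for E != 1)."""
    return (E - 1.0) / (E ** (N + 1) - 1.0)
```

With E = 2 and three stages, both routes give a raffinate fraction of 1/15, i.e., about 93% of the solute is extracted.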

Keywords: distribution coefficient, mass transfer, COSMO-RS, flowsheet simulation, phosphonium

Procedia PDF Downloads 170
6599 Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented in June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. The ERT calling criteria were: acute change of HR < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or worry about the patient. In the early phase of ERT implementation in our hospital (June 2009-2011), there was no statistically significant difference in the incidence of in-hospital cardiac arrest or in the overall hospital mortality rate. Since the introduction of the ERT service, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to assess statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 calls in 2010, 139 calls in 2011, and 245 calls in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011, and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly, from 2.28 to 0.99 (P < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, reaching statistical significance in 2012 (P < 0.001). The overall hospital mortality rate decreased by 8%, from 15.43 to 14.43 per 1000 admissions (P = 0.095).
Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over the three-year period, with the difference becoming statistically significant in the fourth year after implementation. We also found an inverse association between the number of ERT activations and the risk of cardiac arrest, but we found no difference in the overall hospital mortality rate.
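The chi-square comparison of arrest incidence before and after implementation can be reproduced on a 2x2 contingency table. The admission totals below are hypothetical, chosen only to mirror the reported rates of 1.19 and 0.34 arrests per 1000 admissions; the abstract does not give the denominators:

```python
import numpy as np

# Hypothetical 2x2 counts (arrests vs. admissions without arrest); the
# actual admission totals are not reported in the abstract.
observed = np.array([[30, 25170],    # before: ~1.19 arrests per 1000 admissions
                     [ 9, 26451]])   # after:  ~0.34 arrests per 1000 admissions

def chi_square(table):
    """Pearson chi-square statistic for a contingency table."""
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return float(((table - expected) ** 2 / expected).sum())

chi2 = chi_square(observed)
# With 1 degree of freedom, chi2 > 3.841 corresponds to P < 0.05,
# and chi2 > 10.83 to P < 0.001.
```

For these illustrative counts the statistic is well above the P < 0.001 threshold, consistent with the significance reported for the drop in arrest incidence.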

Keywords: emergency response team, ERT, cardiac arrest, emergency medicine

Procedia PDF Downloads 292
6598 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This burgeoning field has created a need for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. Exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore", as the evaluation criterion. This score is created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through the equality-of-opportunity violation measurement. We also improved model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
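An entropy-based randomness score, and one possible harmonic-mean combination with a labeled-source signal, can be sketched as below. The abstract does not give the exact CombinedScore formula, so the combination shown is one plausible reading of "harmonic mean of source information and the entropy score", not the paper's definition:

```python
import numpy as np

def entropy_score(probs):
    """Mean normalized predictive entropy over a batch of softmax outputs:
    0 = fully confident predictions, 1 = uniform (maximally uncertain)."""
    p = np.clip(probs, 1e-12, 1.0)
    ent = -(p * np.log(p)).sum(axis=1)
    return float(ent.mean() / np.log(probs.shape[1]))

def combined_score(source_accuracy, probs):
    """Illustrative harmonic-mean combination of a labeled-source signal
    (accuracy) with an unlabeled-target certainty signal (1 - entropy)."""
    certainty = 1.0 - entropy_score(probs)
    return 2.0 * source_accuracy * certainty / (source_accuracy + certainty + 1e-12)

# Two toy candidate models evaluated on 5 unlabeled OOD samples, 4 classes.
uniform = np.full((5, 4), 0.25)                         # maximally uncertain
confident = np.tile([0.97, 0.01, 0.01, 0.01], (5, 1))   # confident predictions
```

Under this reading, a model that is accurate on the source data and confident on the target data receives the highest score, which is the ranking behaviour the selection framework needs.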

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 109
6597 Measuring Enterprise Growth: Pitfalls and Implications

Authors: N. Šarlija, S. Pfeifer, M. Jeger, A. Bilandžić

Abstract:

Enterprise growth is generally considered a key driver of competitiveness, employment, economic development, and social inclusion. As such, it is perceived to be a highly desirable outcome of entrepreneurship by scholars and decision makers. The extensive academic debate has resulted in a multitude of theoretical frameworks focused on explaining growth stages, determinants, and future prospects. It has been widely accepted that enterprise growth is most likely nonlinear, temporal, and related to a variety of factors which reflect the individual, firm, organizational, industry, or environmental determinants of growth. However, the factors that affect growth are not easily captured, instruments to measure those factors are often arbitrary, and causality between variables and growth is elusive, indicating that growth is not easily modeled. Furthermore, in line with the heterogeneous nature of the growth phenomenon, there is a vast number of measurement constructs assessing growth which are used interchangeably. Differences among various growth measures, at the conceptual as well as the operationalization level, can hinder theory development, which emphasizes the need for more empirically robust studies. In line with these highlights, the purpose of this paper is threefold: first, to compare the structure and performance of three growth prediction models based on the main growth measures: revenue, employment, and assets growth; second, to explore the prospects of financial indicators, as exact, visible, standardized, and accessible variables, to serve as determinants of enterprise growth; and third, to contribute to the understanding of the implications for research results and growth recommendations caused by different growth measures. The models include a range of financial indicators as lagged determinants of the enterprises' performance during 2008-2013, extracted from the national register of financial statements of SMEs in Croatia.
The model design and testing stages used logistic regression procedures. The findings confirm that growth prediction models based on different measures of growth have different sets of predictors. Moreover, the relationship between particular predictors and a growth measure is inconsistent: the same predictor positively related to one growth measure may exert a negative effect on a different growth measure. Overall, financial indicators alone can serve as a good proxy of growth and yield adequate predictive power in the models. The paper sheds light on both the methodology and the conceptual framework of enterprise growth by using a range of variables which serve as a proxy for the multitude of internal and external determinants but are, unlike them, accessible, available, exact, and free of perceptual nuances in building up the model. The selection of the growth measure appears to have a significant impact on the implications and recommendations related to growth. Furthermore, the paper points out potential pitfalls of measuring and predicting growth. Overall, the results and implications of the study are relevant for advancing academic debates on growth-related methodology and can contribute to evidence-based decisions of policy makers.
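A growth prediction model of the kind described, a binary "high-growth vs. not" classifier on financial indicators, can be sketched with plain logistic regression. The two synthetic indicators and the label-generating coefficients below are illustrative, not the Croatian SME register data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "financial indicators" (e.g., a liquidity and a profitability
# ratio, standardized); purely illustrative.
n = 400
X = rng.normal(size=(n, 2))
true_w = np.array([2.0, -1.5])
logits = X @ true_w + 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)  # "high growth" label

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain batch gradient-descent logistic regression (no regularization)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

w = fit_logistic(X, y)
pred = 1 / (1 + np.exp(-np.hstack([X, np.ones((n, 1))]) @ w)) > 0.5
accuracy = (pred == y.astype(bool)).mean()
```

Refitting the same specification against a different growth label (revenue vs. employment vs. assets) is exactly the comparison the paper makes: the selected predictors and even their signs can change with the measure.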

Keywords: growth measurement constructs, logistic regression, prediction of growth potential, small and medium-sized enterprises

Procedia PDF Downloads 234
6596 Influence of Single and Multiple Skin-Core Debonding on Free Vibration Characteristics of Innovative GFRP Sandwich Panels

Authors: Indunil Jayatilake, Warna Karunasena, Weena Lokuge

Abstract:

An Australian manufacturer has fabricated an innovative GFRP sandwich panel made from E-glass fiber skins and a modified phenolic core for structural applications. Debonding, which refers to the separation of the skin from the core material in composite sandwiches, is one of the most common types of damage in composites. The presence of debonding is of great concern because it not only severely affects the stiffness but also modifies the dynamic behaviour of the structure. The majority of research carried out to date has been concerned with the delamination of laminated structures, whereas skin-core debonding has received relatively little attention. Furthermore, research on composite panels with multiple skin-core debonding is very limited. To address this gap, a comprehensive investigation of the dynamic behaviour of composite panels with single and multiple debonding is presented. The study uses finite element modelling and analysis to investigate the influence of debonding on the free vibration behaviour of single-layer and multilayer composite sandwich panels. A broad parametric investigation has been carried out by varying the debonding locations, debonding sizes, and support conditions of the panels for both single and multiple debonding. Numerical models were developed with the Strand7 finite element package by selecting suitable elements to represent the actual behaviour. Three-dimensional finite element models were employed to simulate the physical situation as closely as possible, using an experimentally and numerically validated finite element model. Comparative results and conclusions based on the analyses are presented. For similar extents and locations of debonding, the effect of debonding on natural frequencies appears greatly dependent on the end conditions of the panel, with a greater decrease in natural frequency when the panels are more restrained.
Some modes are more sensitive to debonding, and this sensitivity appears related to their vibration mode shapes. The fundamental mode is generally the least sensitive to debonding with respect to variation in the free vibration characteristics. The results indicate the effectiveness of the developed three-dimensional finite element models in assessing debonding damage in composite sandwich panels.

Keywords: debonding, free vibration behaviour, GFRP sandwich panels, three dimensional finite element modelling

Procedia PDF Downloads 297
6595 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced as a function of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology: many apply well-known numerical methods, and some have taken into account the power of evolutionary approaches. Our goal is to provide domain experts with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and obtain solutions comprehensible in the same terms. Our earlier proposals introduced into the genetic engine highly expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and convenient for the researcher, although they usually require huge search spaces to justify their use, owing to the computational resources the formal models involve. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) for which genetic approaches seem promising. After analyzing the state of the art of this topic, we chose a previous work from the literature that addresses the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm to try to reproduce the published results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, translating their mathematical model, originally optimized numerically, into a proper fitness function.
We analyzed the 100 well curves used in their experiment and observed similar results; in addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the per-well optimum gas injections they report. We identified several constraints that would be interesting to incorporate into the optimization process but that could be difficult to express numerically. It would also be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. These facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
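A classical, simple genetic algorithm of the kind described can be sketched as follows. The quadratic well-performance curves and the gas budget are invented for illustration (they are not the 100 field curves from the referenced work); the fitness function is the total field oil rate subject to the total-gas constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical concave well curves: oil rate q(g) = a*g - b*g^2 per well.
A = np.array([1.0, 0.9, 1.1, 0.8])           # illustrative linear terms
B = np.array([0.010, 0.012, 0.008, 0.015])   # illustrative curvature terms
TOTAL_GAS = 100.0                            # total gas available (constraint)

def fitness(alloc):
    """Total field oil rate for a gas allocation vector."""
    return float(np.sum(A * alloc - B * alloc ** 2))

def normalize(alloc):
    """Project a non-negative allocation onto the total-gas budget."""
    alloc = np.clip(alloc, 0.0, None)
    s = alloc.sum()
    return alloc * (TOTAL_GAS / s) if s > 0 else np.full(len(alloc), TOTAL_GAS / len(alloc))

def run_ga(pop_size=60, generations=200, sigma=2.0):
    pop = np.array([normalize(rng.random(len(A))) for _ in range(pop_size)])
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        new = [pop[int(np.argmax(fit))].copy()]        # elitism: keep the best
        while len(new) < pop_size:
            i, j = rng.integers(0, pop_size, 2)        # tournament selection
            p1 = pop[i] if fit[i] >= fit[j] else pop[j]
            i, j = rng.integers(0, pop_size, 2)
            p2 = pop[i] if fit[i] >= fit[j] else pop[j]
            w = rng.random()                           # blend crossover
            child = w * p1 + (1 - w) * p2
            child += rng.normal(0.0, sigma, len(A))    # Gaussian mutation
            new.append(normalize(child))
        pop = np.array(new)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(fit))], float(fit.max())

best_alloc, best_production = run_ga()
```

For these concave curves the constrained optimum is about 72.0 (obtainable analytically via a Lagrange multiplier), so the GA result can be checked against a known answer, mirroring the paper's comparison with the numerically optimized reference.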

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 147
6594 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers

Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi

Abstract:

Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlaced warm and cold stimuli, each individually non-painful, excite thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI results from the disinhibition, or unmasking, of the pain-sensitive HPC (heat-pinch-cold) nerve fibers caused by reduced activity of the cold-sensitive nerve fibers that normally mask them. Although researchers have focused on understanding TGI through experiments and models, none have investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. Predicting pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory, utilizing existing popular models of warm and cold receptors in the skin, and aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All participants rated the perceived TGI pain sensation on a scale of one to ten. For the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers.
The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. A similar monotonically increasing relationship is observed experimentally between the temperature differences and the perceived TGI intensity. This supports comparing the TGI pain intensity observed in the experimental study with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
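The disinhibition mechanism can be illustrated with a toy firing-rate model. The sigmoid receptor responses and the inhibition constant below are invented for illustration and are not the receptor models used in the study; the sketch only shows how warm bars weakening cold-fiber inhibition yields activity that grows with the grill temperature difference:

```python
import numpy as np

# Toy static firing-rate models (sigmoids); all parameter values are
# invented placeholders, not the study's receptor models.
def warm_rate(t):   # warm-receptor activity, rising for warm skin temperatures
    return 1.0 / (1.0 + np.exp(-(t - 36.0) / 2.0))

def cold_rate(t):   # cold-receptor activity, rising as temperature drops
    return 1.0 / (1.0 + np.exp((t - 28.0) / 2.0))

def perceived_tgi(delta_t, t_neutral=32.0, k=0.8):
    """Disinhibition sketch: HPC fibers are driven by the cold bars, while
    warm bars reduce the cold-fiber inhibition that normally masks them."""
    t_warm = t_neutral + delta_t / 2.0
    t_cold = t_neutral - delta_t / 2.0
    hpc_drive = cold_rate(t_cold)                   # HPC excitation by cold bars
    unmasking = 1.0 - k * (1.0 - warm_rate(t_warm)) # less inhibition as bars warm
    return max(0.0, hpc_drive * unmasking)

deltas = np.arange(0.0, 12.1, 2.0)                  # grill temperature differences
pain = np.array([perceived_tgi(d) for d in deltas])
```

Even this toy formulation reproduces the qualitative finding above: predicted HPC activity increases monotonically with the warm-cold temperature difference.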

Keywords: thermal grill Illusion, computational modelling, simulation, psychophysics, haptics

Procedia PDF Downloads 149
6593 Brazilian Constitution and the Fundamental Right to Sanitation

Authors: Michely Vargas Delpupo, José Geraldo Romanello Bueno

Abstract:

The right to basic sanitation was elevated to the category of fundamental right by the Brazilian Constitution of 1988 to protect the ecologically balanced environment, ensuring the social rights to health and adequate housing and warranting the dignity of the human person as a principle of the Brazilian democratic state. Given its essentiality to the Brazilian population, this article seeks to understand why universal access to basic sanitation is such a difficult goal to achieve in Brazil. The research uses the deductive and analytical method. Given the bibliographic nature of the research, the techniques employed were centred on specialized books on the subject, journals, theses and dissertations, laws, relevant case law, and social indicators relating to the theme. The relevance of the topic stems, among other things, from the fact that sanitation services are essential for a dignified life, i.e., everyone is entitled to the maintenance of the conditions necessary for existence. However, the effectiveness of this right is undermined in society, since Brazil has a huge deficit in sanitation services, thus denying a dignified life to most of the population. It can be seen that the provision of water and sewage services in Brazil is still characterized by a large imbalance, since municipalities with a lower population index have a greater deficiency in sanitation services. The precariousness of water and sewage services in Brazil remains heavily concentrated in the North and Northeast regions, limiting the effective implementation of Law 11.445/2007 in the country. Therefore, there is an urgent need for positive action by the State in the provision of sanitation services in order to prevent and control disease, improve the quality of life and productivity of individuals, and prevent the contamination of water resources.
More than a social and economic necessity, there is an obligation on the government to implement such services. In this sense, given the current scenario, achieving universal access to basic sanitation faces many hurdles. These lie mainly in the field of properly formulated and implemented public policies: it requires excellent institutional organization, service management, strategic planning, and social control in order to provide answers to complex challenges.

Keywords: fundamental rights, health, sanitation, universal access

Procedia PDF Downloads 388
6592 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas

Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee

Abstract:

U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing, i.e., Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with patterns indicative of enhanced onshore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support the integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS).
The overall results will prioritize ecological areas and coastal laboratory facilities at risk based on the number of nuisance floods projected, and will inform coastal management of flood risk around low-lying areas subject to bank erosion.
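The SOM step, mapping de-trended SSH feature vectors onto a small grid of characteristic nodes, can be sketched with a minimal numpy implementation. The three-cluster "SSH anomaly" data below are synthetic stand-ins for the altimetry-derived inputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def train_som(data, grid=(3, 3), epochs=30, lr0=0.5, sigma0=1.5):
    """Minimal SOM: map samples onto a small 2-D grid of prototype vectors."""
    rows, cols = grid
    n_nodes, dim = rows * cols, data.shape[1]
    weights = rng.random((n_nodes, dim))
    # (row, col) coordinate of each node, used by the neighborhood function
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                 # decaying learning rate
            sigma = sigma0 * (1 - frac) + 1e-3    # shrinking neighborhood
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))    # Gaussian neighborhood
            weights += lr * h[:, None] * (x - weights)
            step += 1
    return weights

def quantization_error(data, weights):
    d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

# Synthetic 2-D "SSH anomaly" features clustered into three regimes.
centers = np.array([[0.2, 0.2], [0.8, 0.2], [0.5, 0.8]])
data = np.vstack([c + 0.02 * rng.normal(size=(40, 2)) for c in centers])
weights = train_som(data)
qe_trained = quantization_error(data, weights)
qe_random = quantization_error(data, rng.random((9, 2)))
```

After training, each sample's best-matching node gives the regime classification; in the study, transitions between such nodes are what flag flood-favorable atmospheric states.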

Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood

Procedia PDF Downloads 124
6591 Long-Term Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented in June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. The ERT calling criteria were: acute change of HR < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or worry about the patient. In the early phase of ERT implementation in our hospital (June 2009-2011), there was no statistically significant difference in the incidence of in-hospital cardiac arrest or in the overall hospital mortality rate. Since the introduction of the ERT service, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the long-term effect of an ERT system on the incidence of cardiac arrest. We performed chi-square analysis to assess statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 calls in 2010, 139 calls in 2011, and 245 calls in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011, and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly, from 2.28 to 0.99 (P < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, reaching statistical significance in 2012 (P < 0.001). The overall hospital mortality rate decreased by 8%, from 15.43 to 14.43 per 1000 admissions (P = 0.095).
Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over the three-year period, with the difference reaching statistical significance in the fourth year after implementation. We also found an inverse association between the number of ERT activations and the risk of cardiac arrest, but no difference in the overall hospital mortality rate.
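The before-and-after chi-square comparison described above can be sketched in a few lines. The admission and event counts below are hypothetical stand-ins (the abstract reports only rates per 1000 admissions, not raw denominators); the 2x2 test statistic itself is standard.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a 2x2
    table: rows are the two periods, columns are (event, no event)."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts: arrests vs. arrest-free admissions in two periods.
arrests_before, admissions_before = 60, 50_000
arrests_after, admissions_after = 20, 58_000
chi2 = chi_square_2x2(arrests_before, admissions_before - arrests_before,
                      arrests_after, admissions_after - arrests_after)
# With 1 degree of freedom, chi2 > 10.83 corresponds to P < 0.001.
print(f"chi-square = {chi2:.2f}")
```

With these illustrative counts the statistic far exceeds the P < 0.001 critical value, mirroring the kind of result the study reports for code blue calls.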

Keywords: cardiac arrest, outcome, in-hospital, ERT

Procedia PDF Downloads 183
6590 A Prediction Model Using the Price Cyclicality Function Optimized for Algorithmic Trading in Financial Market

Authors: Cristian Păuna

Abstract:

Since the widespread adoption of electronic trading, automated trading systems have become a significant part of the business intelligence infrastructure of any modern financial investment company. Today an important share of trades is executed completely automatically by computers running mathematical algorithms: trading decisions are taken almost instantly by logical models, and orders are sent by low-latency automatic systems. This paper presents a real-time price prediction methodology designed especially for algorithmic trading. Based on the price cyclicality function, the methodology generates price cyclicality bands that predict optimal levels for entries and exits. To automate the trading decisions, the cyclicality bands generate automated trading signals. We found that the model can be used with good results to predict changes in market behavior; using these predictions, the model automatically adapts the trading signals in real time to maximize the trading results. The paper describes how to optimize and implement this model in automated trading systems. Tests prove that the methodology can be applied with good efficiency on different timeframes. Real trading results are also presented and analyzed in order to qualify the methodology and compare it with other models. In conclusion, the price prediction model using the price cyclicality function was found to be a reliable trading methodology for algorithmic trading in the financial market.
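The band idea can be illustrated with a minimal sketch: build a min-max-normalised oscillator from the spread between a fast and a slow moving average and turn band crossings into signals. This is a simplified stand-in for the price cyclicality function, not the author's exact formulation; the window lengths and band levels are arbitrary assumptions.

```python
import math

def sma(prices, n):
    """Simple moving average; value i uses the n prices ending at i."""
    return [sum(prices[i - n + 1:i + 1]) / n for i in range(n - 1, len(prices))]

def cyclicality_bands(prices, fast=5, slow=20, low=20.0, high=80.0):
    """Oscillator from the fast/slow SMA spread, min-max normalised to
    [0, 100]; 'buy' below the low band, 'sell' above the high band."""
    f = sma(prices, fast)[slow - fast:]   # align the fast SMA to the slow one
    s = sma(prices, slow)
    spread = [a - b for a, b in zip(f, s)]
    lo, hi = min(spread), max(spread)
    osc = [100.0 * (v - lo) / (hi - lo) for v in spread]
    signals = ["buy" if v < low else "sell" if v > high else "hold"
               for v in osc]
    return osc, signals

# Demo on a synthetic cyclical price series.
prices = [100 + 10 * math.sin(i / 10) for i in range(200)]
osc, signals = cyclicality_bands(prices)
```

On a cyclical series like this one, the oscillator swings through both bands, so both entry and exit signals appear.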

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, price prediction

Procedia PDF Downloads 168
6589 Characteristics and Flight Test Analysis of a Fixed-Wing UAV with Hover Capability

Authors: Ferit Çakıcı, M. Kemal Leblebicioğlu

Abstract:

In this study, the characteristics and flight test analysis of a fixed-wing unmanned aerial vehicle (UAV) with hover capability are presented. The base platform is a conventional airplane with throttle, aileron, elevator, and rudder control surfaces, which inherently allows level flight. This aircraft is then mechanically modified by integrating vertical propellers, as in multirotors, to provide hover capability. The aircraft is modeled using basic aerodynamic principles, and linear models are constructed using small perturbation theory about trim conditions. Flight characteristics are analyzed with the state-space approach of linear control theory. Distinctive features of the aircraft are discussed on the basis of the analysis results, in comparison with conventional aircraft platform types. A hybrid control system is proposed to exploit these unique flight characteristics. The main approach is to design separate controllers for the different modes of operation, together with a hand-over logic that makes flight in an enlarged flight envelope viable. Simulations on the mathematical models verify the proposed algorithms. Real-world flight tests demonstrated the applicability of the proposed methods in exploiting both the fixed-wing and rotary-wing characteristics of the aircraft, which provide agility, survivability, and functionality.
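The linear-model analysis can be illustrated with a toy state-space simulation. The two-state short-period-style model below (angle of attack and pitch rate driven by elevator deflection) uses made-up illustrative coefficients, not the identified dynamics of the paper's aircraft; it only demonstrates the x' = Ax + Bu machinery.

```python
def simulate_lti(A, B, u, x0, dt, t_end):
    """Forward-Euler integration of the linear model x' = A x + B u
    with a constant scalar input u."""
    x = list(x0)
    for _ in range(int(t_end / dt)):
        dx = [sum(A[i][j] * x[j] for j in range(len(x))) + B[i] * u
              for i in range(len(x))]
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Illustrative short-period-style longitudinal model (assumed numbers):
# states x = [alpha (rad), q (rad/s)]; input u = elevator deflection (rad).
A = [[-1.0,  1.0],
     [-5.0, -2.0]]
B = [0.0, -10.0]
x_final = simulate_lti(A, B, u=0.05, x0=[0.0, 0.0], dt=0.001, t_end=5.0)
```

Because the assumed A matrix is stable (eigenvalues with negative real part), the step response settles to the steady state x = -A^{-1} B u, here roughly [-0.071, -0.071].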

Keywords: flight test, flight characteristics, hybrid aircraft, unmanned aerial vehicle

Procedia PDF Downloads 309
6588 An Integrated Framework for Seismic Risk Mitigation Decision Making

Authors: Mojtaba Sadeghi, Farshid Baniassadi, Hamed Kashani

Abstract:

One of the challenges faced by seismic retrofitting consultants and building owners is deciding quickly whether a structure should be demolished or retrofitted, now or in the future. The existing models proposed by researchers each cover only one of the relevant aspects: cost, execution method, or structural vulnerability. Given the effect of each factor on the final decision, it is crucial to devise a comprehensive model that covers all the factors simultaneously. This study provides an integrated framework for selecting the most appropriate earthquake risk mitigation solution for buildings. The framework overcomes the limitations of current models by taking into account cost, execution method, risk attitude, and structural failure. In the proposed model, a database of essential information about retrofitting projects is first developed from historical data on past retrofit projects. In the next phase, an analysis is conducted to assess the vulnerability of the building under study. An artificial neural network is then employed to estimate the cost of retrofitting. After estimating the current value of the structure, an economic analysis compares demolition against retrofitting costs, and the optimal option is identified. Finally, the framework was demonstrated using data collected from 155 previous projects.
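The economic-comparison step of the framework can be sketched as a simple net-benefit rule. The function and its parameter names are hypothetical illustrations: in the framework, the retrofit cost would come from the neural-network estimator, and the building values from the vulnerability and market analyses.

```python
def preferred_action(retrofit_cost, retrofitted_value,
                     demolition_cost, rebuild_cost, new_building_value):
    """Compare the net benefit of retrofitting against demolish-and-rebuild
    (hypothetical decision rule; all amounts in the same currency units)."""
    retrofit_net = retrofitted_value - retrofit_cost
    rebuild_net = new_building_value - (demolition_cost + rebuild_cost)
    return "retrofit" if retrofit_net >= rebuild_net else "demolish"
```

For example, a cheap retrofit that preserves most of a building's value favors retrofitting, while an expensive retrofit of a low-value structure favors demolition and rebuilding.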

Keywords: decision making, demolition, construction management, seismic retrofit

Procedia PDF Downloads 220
6587 Multi-Dimensional (Quantitative and Qualitative) Longitudinal Research Methods for Biomedical Research of Post-COVID-19 (“Long COVID”) Symptoms

Authors: Steven G. Sclan

Abstract:

Background: Since December 2019, the world has been afflicted by the spread of the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), the agent responsible for COVID-19. The illness has had a cataclysmic impact on the political, social, economic, and overall well-being of the entire global population. While COVID-19 has had a substantial worldwide fatality impact, it may have an even greater effect on socioeconomic conditions, medical well-being, and healthcare planning for the surviving population. Significance: Many more persons survive the infection than die from it, and many of those patients have noted ongoing, persistent symptoms after successfully enduring the acute phase of the illness. Recognition and understanding of these symptoms are crucial for developing efficacious models of care for all patients surviving acute COVID-19 illness (whether or not hospitalized) who are burdened by post-acute symptoms. Furthermore, regarding COVID-19 infection in children (< 18 years old), although COVID-positive children may not be major vectors of transmission, it now appears that many more children than initially thought carry the virus without obvious symptomatic expression. It seems reasonable to ask whether viral effects occur in children who are COVID-positive but asymptomatic and whether, over time, they might also experience similar symptoms. An even more significant question is whether COVID-positive asymptomatic children might manifest more health problems as they grow, i.e., developmental complications (e.g., physical/medical, metabolic, neurobehavioral), than children who remained consistently COVID-negative during the pandemic. Topics Addressed and Theoretical Importance: This review is important because it describes both quantitative and qualitative methods for clinical and biomedical research.
The topics reviewed include the importance of well-designed, comprehensive (i.e., combined quantitative and qualitative) longitudinal studies of post-COVID-19 symptoms in both adults and children, the general characteristics of longitudinal studies, and a model for a proposed study. Also discussed is the benefit of longitudinal studies for the development of efficacious interventions and for the establishment of cogent, practical, and efficacious community healthcare service planning for post-acute COVID-19 patients. Conclusion: The results of multi-dimensional longitudinal studies will have important theoretical implications. Such studies will improve our understanding of the pathophysiology of long COVID, aid in the identification of potential targets for treatment, and provide valuable insights into the long-term public-health and socioeconomic impact of COVID-19.

Keywords: COVID-19, post-COVID-19, long COVID, longitudinal research, quantitative research, qualitative research

Procedia PDF Downloads 45