Search results for: analytical data
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26347

25867 A Higher Order Shear and Normal Deformation Theory for Functionally Graded Sandwich Beam

Authors: R. Bennai, H. Ait Atmane, Jr., A. Tounsi

Abstract:

In this work, a new analytical approach using a refined hyperbolic shear deformation beam theory was developed to study the free vibration of functionally graded sandwich beams under different boundary conditions. The effects of transverse shear strains and transverse normal deformation are considered. The constituent materials of the beam are assumed to vary gradually along the height direction according to a simple power-law distribution in terms of the volume fractions of the constituents; the two materials considered are metal and ceramic. The core layer is taken to be homogeneous and made of an isotropic material, while the face layers consist of FGM material with a homogeneous fraction relative to the middle layer. The equations of motion are obtained from the energy minimization principle. Analytical solutions for free vibration and buckling are obtained for sandwich beams under different support conditions; these conditions are taken into account by incorporating new shape functions. Finally, illustrative examples are presented to show the effects of variations in different parameters (material gradation, the thickness-stretching effect, boundary conditions, and length-to-thickness ratio) on the free vibration and buckling of FGM sandwich beams.

Keywords: functionally graded sandwich beam, refined shear deformation theory, stretching effect, free vibration

Procedia PDF Downloads 240
25866 The Exercise of Deliberative Democracy on Public Administrations Agencies' Decisions

Authors: Mauricio Filho, Carina Castro

Abstract:

The object of this project is to analyze long-serving public agents who have worked through several governments and now find themselves having to deliberate with new agents recently installed in the public administration. For theoretical purposes, internal deliberation is understood as deliberation practiced within public administration agencies, without any direct participation by the general public. The assumption is: agents with longer periods of public service tend to step away from the momentary political discussions that guide the current administration and concentrate instead on institutionalized routines and procedures, leading the individuals most politically aligned with the current government to deliberate with less "passion" and more exchange of knowledge and information. The theoretical framework of this research is institutionalism, which is guided by a more pragmatic view of the fluidity of reality, showing the multiple relations between agents and their respective institutions. The critical aspirations of this project rest on the works of professors Cass Sunstein, Adrian Vermeule, and Philip Pettit, and on the literature of both institutional theory and the economic analysis of law, greatly influenced by the Chicago Law School. Methodologically, the paper is a theoretical review and is intended to be extended, at a future stage, with empirical tests for verification. Its main analytical tool is an appeal to the theoretical and doctrinal areas of the juridical sciences, adopting the deductive and analytical method.

Keywords: institutions, state, law, agencies

Procedia PDF Downloads 256
25865 Passive Aeration of Wastewater: Analytical Model

Authors: Ayman M. El-Zahaby, Ahmed S. El-Gendy

Abstract:

Aeration of wastewater is essential for the proper operation of aerobic treatment units, as the influent wastewater normally has zero dissolved oxygen while the aerobic microorganisms need oxygen to grow and survive. Typical aeration units for wastewater treatment, such as mechanical or diffused aerators, require electric energy to operate. Passive units, such as cascade, spray, and tray aerators, operate without electric energy. In contrast to cascade and spray aerators, tray aerators require a much smaller footprint for installation, as the treatment stages are arranged vertically. To the best of the authors' knowledge, the design of tray aerators for aeration purposes has not been presented in the literature. The current research is an analytical study of the design of tray aerators for increasing the dissolved oxygen in wastewater treatment systems, including an investigation of different design parameters and their impact on aeration efficiency. The studied aerator would act as an intermediate stage between an anaerobic primary treatment unit and an aerobic treatment unit in small-scale treatment systems. Different free-falling flow regimes were investigated, and the thresholds for transition between regimes were obtained from the literature. The study focused on the jetting flow regime between trays. Starting from the two-film theory, an equation was derived that relates the dissolved oxygen concentration of the system effluent to the flow rate, number of trays, tray area, spacing between trays, number and diameter of holes, and water temperature. A MATLAB model was developed for the derived equation. The expected aeration efficiency under different tray configurations and operating conditions was illustrated by running the model while varying the design parameters, and the impact of each parameter was shown. The overall system efficiency was found to increase with decreasing hole diameter; increasing the number of trays, tray area, flow rate per hole, or tray spacing also had a positive effect on system efficiency.
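
The tray cascade described in this abstract can be sketched as a chain of two-film transfer steps, each pulling the dissolved oxygen (DO) concentration toward saturation. The lumped per-tray coefficient `k_per_tray` below is a hypothetical stand-in for the paper's derived function of tray area, spacing, and hole count and diameter; a minimal Python illustration:

```python
import math

def effluent_do(c_in, c_sat, n_trays, k_per_tray):
    """Cascade the two-film transfer step over a vertical stack of trays.

    Each tray applies C_out = C_sat - (C_sat - C_in) * exp(-k), where k
    lumps the tray geometry and flow parameters (a hypothetical placeholder
    for the derived design equation).
    """
    c = c_in
    for _ in range(n_trays):
        c = c_sat - (c_sat - c) * math.exp(-k_per_tray)
    return c

# anaerobic effluent (zero DO) aerated over 4 vs. 8 trays
do_4 = effluent_do(0.0, 9.0, 4, 0.3)
do_8 = effluent_do(0.0, 9.0, 8, 0.3)
```

Consistent with the abstract, adding trays (or raising the transfer coefficient) moves the effluent DO closer to saturation without ever exceeding it.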

Keywords: aeration, analytical, passive, wastewater

Procedia PDF Downloads 201
25864 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement to maintain data integrity in laboratory operations is critical for regulatory compliance, and automation of procedures reduces the incidence of human error. Quality control laboratories in low-income economies may face barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. The zinc content of these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of the 0.1 M sodium edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were entered into two spreadsheets to automate the calculations, and further checks were built into the automated system to ensure the validity of replicate analyses in the titrimetric procedures. Validation was conducted using five data sets of manually computed assay results, and the acceptance criteria set in the protocol were met. Significant p-values (p < 0.05, α = 0.05, at a 95% confidence interval) were obtained from Student's t-test comparisons of the mean values of the manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and the principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human error in calculations was minimized when the procedures were automated in quality control laboratories, and the assay was completed in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
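
The manual-versus-spreadsheet comparison described in this abstract reduces to a paired Student's t-test on the two sets of assay results. A minimal sketch, using made-up assay figures rather than the study's data:

```python
import math

def paired_t(manual, spreadsheet):
    """Paired Student's t statistic for manually computed vs. spreadsheet
    assay results (t = mean difference / standard error of differences)."""
    diffs = [m - s for m, s in zip(manual, spreadsheet)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# five replicate assay results (% of label claim) -- illustrative numbers only
manual_pct = [100.1, 99.8, 100.4, 99.9, 100.2]
sheet_pct = [100.0, 100.0, 100.0, 100.0, 100.0]
t_stat = paired_t(manual_pct, sheet_pct)
```

The statistic would then be compared against the t distribution with n - 1 degrees of freedom at α = 0.05, as in the validation protocol.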

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 165
25863 Structural Performance of Composite Steel and Concrete Beams

Authors: Jakub Bartus

Abstract:

In general, composite steel and concrete structures present an effective structural solution that utilizes the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, a major objective of the last decade highlighted by the current economic and social crisis. The study not only analyzes the behaviour of composite beams with web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The main investigation focused on the change in structural performance with respect to various layouts of openings; in examining this structural modification, improving the load-carrying capacity of composite beams was a primary objective. The study is divided into an analytical and a numerical part. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the description of the structural problem in the form of a finite element model (FEM) using strut and shell elements and accounting for material non-linearities. As an outcome, a number of conclusions were drawn describing and explaining the effect of web openings on the structural performance of composite beams.

Keywords: composite beam, web opening, steel flange, total strain, finite element analysis

Procedia PDF Downloads 61
25862 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process

Authors: A. Assad, T. Zayed

Abstract:

Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental, or social impacts on a community. Including criticality in the performance index makes it a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic combined with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, on a 0-1 scale, that quantifies the severity of the consequence of failure of each pipeline. A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating criticality. The practical value of the study is represented by an automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of material and time resources is required.
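
The MAUT aggregation step described in this abstract is, at its core, a weighted sum of criteria utilities, with weights coming from the fuzzy ANP. The factor names and numbers below are illustrative, not the study's elicited values:

```python
def criticality_index(weights, utilities):
    """Weighted-sum MAUT aggregation of criteria utilities (each scaled
    to [0, 1]) using ANP-derived weights that sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * utilities[factor] for factor, w in weights.items())

# hypothetical pipeline: ANP weights and normalized attribute utilities
weights = {"economic": 0.5, "social": 0.3, "environmental": 0.2}
utilities = {"economic": 0.8, "social": 0.4, "environmental": 0.5}
index = criticality_index(weights, utilities)  # 0-1 criticality score
```

Pipelines would then be ranked by this index to prioritize rehabilitation spending.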

Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process

Procedia PDF Downloads 141
25861 Prevalence and Associated Factors with Burnout Among Secondary School Teachers in the City of Cotonou in Benin in 2022

Authors: Antoine Vikkey Hinson, Ranty Jolianelle Dassi, Menonli Adjobimey, Rose Mikponhoue, Paul Ayelo

Abstract:

Introduction: The psychological hardship of the teaching profession sustains a chronic stress that inevitably evolves into burnout (BO) in the absence of adequate preventive measures. The objective of this study is to determine the prevalence of, and factors associated with, burnout among secondary school teachers in the city of Cotonou in 2022. Methods: This was a descriptive cross-sectional study with an analytical aim and prospective data collection, carried out over two periods: from July 19 to August 19 and from October 1 to October 31, 2022. Sampling was done using a three-stage probability sampling technique. Data analysis was performed using R 4.1.1 software. Bivariate logistic regression was used to identify associated factors, with a significance level of 5% (p < 0.05). Results: A total of 270 teachers were included in the study, of whom 208 (77.00%) were men. The mean age of the workers was 38.03 ± 8.30 years. According to the Maslach Burnout Inventory, 58.51% of the teachers had burnout, with 41.10% of teachers showing emotional exhaustion, 27.40% depersonalization, and 21.90% loss of personal accomplishment. The severity of the syndrome was low to moderate in almost all teachers. The occurrence of BO was associated with not practicing sports (ORa = 2.38 [1.32; 4.28]), job training (ORa = 1.86 [1.04; 3.34]), and an effort/reward imbalance (ORa = 5.98 [2.24; 15.98]). Conclusion: The prevalence of BO is high among secondary school teachers in the city of Cotonou. A larger-scale study, including research on the consequences for teachers and learners, is necessary in order to act quickly and implement a prevention program.

Keywords: burnout, teachers, Maslach burnout inventory, associated factors, Benin

Procedia PDF Downloads 70
25860 Comparison of Water Equivalent Ratio of Several Dosimetric Materials in Proton Therapy Using Monte Carlo Simulations and Experimental Data

Authors: M. R. Akbari, H. Yousefnia, E. Mirrezaei

Abstract:

Range uncertainties of protons are currently a topic of interest in proton therapy. Two of the parameters often used to specify proton range are water equivalent thickness (WET) and water equivalent ratio (WER). Since the WER of a given material is nearly constant across proton energies, it is the more useful parameter for comparison. In this study, WER values were calculated for different proton energies in polymethyl methacrylate (PMMA), polystyrene (PS), and aluminum (Al) using the FLUKA and TRIM codes. The results were compared with analytical and experimental data and with simulated SEICS code data obtained from the literature. In the FLUKA simulation, a cylindrical phantom, 1000 mm in height and 300 mm in diameter, filled with the studied materials was modeled. A typical mono-energetic proton pencil beam, over the range of incident energies usually applied in proton therapy (50 MeV to 225 MeV), impinges normally on the phantom. To obtain the WER values for the considered materials, cylindrical detectors, 1 mm in height and 20 mm in diameter, were also simulated along the beam trajectory in the phantom. In the TRIM calculations, the type of projectile, the energy and angle of incidence, and the type and thickness of the target material must be defined; the 'detailed calculation with full damage cascades' mode was selected for proton transport in the target material. The biggest differences in WER values between the codes were 3.19%, 1.9%, and 0.67% for Al, PMMA, and PS, respectively. In Al and PMMA, the biggest differences between each code and the experimental data were 1.08%, 1.26%, 2.55%, 0.94%, 0.77%, and 0.95% for SEICS, FLUKA, and SRIM, respectively. FLUKA and SEICS showed the greatest agreement with the available experimental data in this study (≤0.77% difference in PMMA and ≤1.08% difference in Al, respectively). It is concluded that the FLUKA and TRIM codes are capable of simulating Bragg curves and calculating WER values in the studied materials, and can also predict the Bragg peak location and the range of proton beams with acceptable accuracy.
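
The WER itself is a simple ratio of ranges, and the inter-code comparisons above are percent differences; a small sketch (the range figures are illustrative, not the study's values):

```python
def wer(range_in_water_mm, range_in_material_mm):
    """Water equivalent ratio: proton range in water divided by the
    range in the material at the same incident energy."""
    return range_in_water_mm / range_in_material_mm

def percent_diff(a, b):
    """Relative difference of a with respect to reference b, in percent."""
    return abs(a - b) / b * 100.0

# e.g. a 100 mm range in water vs. 86.2 mm in PMMA (illustrative numbers)
wer_pmma = wer(100.0, 86.2)
```

Comparing WER values computed by two codes with `percent_diff` gives figures directly analogous to the 0.67-3.19% differences reported above.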

Keywords: water equivalent ratio, dosimetric materials, proton therapy, Monte Carlo simulations

Procedia PDF Downloads 314
25859 Evaluation of Oxidative Changes in Soybean Oil During Shelf-Life by Physico-Chemical Methods and Headspace-Liquid Phase Microextraction (HS-LPME) Technique

Authors: Maryam Enteshari, Kooshan Nayebzadeh, Abdorreza Mohammadi

Abstract:

In this study, the oxidative stability of soybean oil under different storage temperatures (4 and 25˚C) over a 6-month shelf-life was investigated by various analytical methods and by headspace-liquid phase microextraction (HS-LPME) coupled to gas chromatography-mass spectrometry (GC-MS). Oxidative changes were monitored through analytical parameters consisting of acid value (AV), peroxide value (PV), p-anisidine value (p-AV), thiobarbituric acid value (TBA), fatty acid profile, iodine value (IV), and oxidative stability index (OSI). In addition, concentrations of hexanal and heptanal, as secondary volatile oxidation compounds, were determined by the HS-LPME/GC-MS technique. The rate of oxidation in soybean oil stored at 25˚C was considerably higher. The AV, p-AV, and TBA increased gradually over the 6 months, while the amount of unsaturated fatty acids, the IV, and the OSI decreased. The concentrations of hexanal and heptanal and the PV increased during the first months of storage; then, at the end of the third and fourth months, a sudden decrease was observed simultaneously in the hexanal and heptanal concentrations and in the PV, after which these parameters increased again until the end of the shelf-life. Temperature and time were thus effective factors in the oxidative stability of soybean oil. Strong correlations were also found for soybean oil stored at 4˚C between AV and TBA (r² = 0.96), PV and p-AV (r² = 0.9), IV and TBA (inverse, r² = 0.9), and p-AV and TBA (r² = 0.99).
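
The correlation figures reported above are Pearson coefficients between pairs of oxidation parameters tracked over the shelf-life; a minimal sketch with synthetic monthly readings (not the study's measurements):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# synthetic monthly AV and TBA readings that rise together over storage
av = [0.10, 0.15, 0.22, 0.30, 0.41, 0.55]
tba = [0.02, 0.04, 0.07, 0.11, 0.16, 0.22]
r = pearson_r(av, tba)
r_squared = r ** 2
```

A negative r with a high r², as for IV and TBA, indicates one parameter falling as the other rises.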

Keywords: headspace-liquid phase microextraction, oxidation, shelf-life, soybean oil

Procedia PDF Downloads 396
25858 Evaluation of Nanoparticle Application to Control Formation Damage in Porous Media: Laboratory and Mathematical Modelling

Authors: Gabriel Malgaresi, Sara Borazjani, Hadi Madani, Pavel Bedrikovetsky

Abstract:

Suspension-colloidal flow in porous media occurs in numerous engineering fields, such as industrial water treatment, the disposal of industrial wastes into aquifers with the propagation of contaminants, and low-salinity water injection into petroleum reservoirs. The main effects are particle mobilization and capture by the porous rock, which can cause pore plugging and permeability reduction, known as formation damage. Various factors, such as fluid salinity, pH, temperature, and rock properties, affect particle detachment. Formation damage is particularly unfavorable near injection and production wells. One way to control formation damage is pre-treatment of the rock with nanoparticles: adsorption of nanoparticles on fines and rock surfaces alters the zeta potential of the surfaces and enhances the attachment force between the rock and the fine particles. The main objective of this study is to develop a two-stage mathematical model for (1) flow and adsorption of nanoparticles on the rock during pre-treatment and (2) fines migration and permeability reduction during water production after pre-treatment. The model accounts for adsorption and desorption of nanoparticles, fines migration, and the kinetics of particle capture, and the system of equations allows an exact solution. The non-self-similar wave-interaction problem was solved by the method of characteristics. The analytical model is new in two ways: first, it accounts for the specific boundary and initial conditions describing nanoparticle injection and production from the pre-treated porous media; second, it contains the effect of nanoparticle sorption hysteresis. The derived analytical model contains explicit formulae for the concentration fronts along with the pressure drop, and the solution is used to determine the optimal injection concentration of nanoparticles to avoid formation damage. The mathematical model was validated via an innovative laboratory program comprising two sets of core-flood experiments: (1) production of water without nanoparticle pre-treatment, and (2) pre-treatment of a similar core with nanoparticles followed by water production. Positively charged alumina nanoparticles with an average particle size of 100 nm were used for the rock pre-treatment. The core was saturated with the nanoparticles and then flushed with low-salinity water; the pressure drop across the core and the outlet fines concentration were monitored and used for model validation. The analytical modeling showed a significant reduction in the outlet fines concentration and in formation damage, in good agreement with the core-flood data. The exact solution accurately describes fine-particle breakthrough and evaluates the positive effect of nanoparticles on formation damage. We show that the adsorbed nanoparticle concentration strongly affects the permeability of the porous media: for the laboratory case presented, the reduction in permeability after 1 PVI of production in the pre-treated scenario is 50% lower than in the reference case. The main outcome of this study is a validated mathematical model to evaluate the effect of nanoparticles on formation damage.

Keywords: nano-particles, formation damage, permeability, fines migration

Procedia PDF Downloads 612
25857 The Development of Nursing Model for Pregnant Women to Prevention of Early Postpartum Hemorrhage

Authors: Wadsana Sarakarn, Pimonpan Charoensri, Baliya Chaiyara

Abstract:

Objectives: To study the outcomes of a nursing model developed to prevent early postpartum hemorrhage (PPH). Materials and Methods: This analytical study was conducted in Sunpasitthiprasong Hospital from October 1, 2015, to May 31, 2017. After reviewing the prevalence, risk factors, and outcomes of postpartum hemorrhage among parturients who gave birth in Sunpasitthiprasong Hospital, the nursing model was developed following the action research cycle of Kemmis & McTaggart, using 4 operating steps: 1) analyzing the problem situation and gathering data; 2) creating the plan; 3) observing and acting; 4) reflecting on the result of the operation. The nursing model consisted of screening tools for risk factors associated with PPH, a clinical nursing practice guideline (CNPG), and a collecting bag for measuring postpartum blood loss. The primary outcome was early postpartum hemorrhage; secondary outcomes were postpartum hysterectomy, maternal mortality, and the personnel's practice, knowledge, and satisfaction with the nursing model. The data were analyzed using content analysis for qualitative data and descriptive statistics for quantitative data. Results: Before the nursing model was used, the prevalence of early postpartum hemorrhage was underestimated (2.97%), and there were 5 cases of postpartum hysterectomy and 2 maternal deaths due to postpartum hemorrhage. During the study period, the prevalence of postpartum hemorrhage was 22.7% among 220 pregnant women who delivered vaginally at Sunpasitthiprasong Hospital. No maternal death or postpartum hysterectomy was reported after the nursing model was adopted. The 16 registered nurses in the delivery room who evaluated the nursing model reported high levels of practice, knowledge, and satisfaction. Conclusion: The nursing model for the prevention of early PPH is effective in decreasing early PPH and other serious complications.

Keywords: the development of a nursing model, prevention of postpartum hemorrhage, pregnant women, postpartum hemorrhage

Procedia PDF Downloads 94
25856 The Economic Limitations of Defining Data Ownership Rights

Authors: Kacper Tomasz Kröber-Mulawa

Abstract:

This paper addresses the topic of data ownership from an economic perspective. Examples of the economic limitations of data property rights are provided, identified using the methods and approaches of the economic analysis of law. To build a background for the economic focus, a short overview of data and data ownership in the EU's legal system is first provided, including a brief introduction to its political and social importance and a highlight of the relevant viewpoints. This stresses the importance of a single market for data, but also the far-reaching regulations on data governance and privacy (including the distinctions between personal and non-personal data, and between data held by public bodies and by private businesses). The main discussion of the paper builds upon this legal basis as well as the methods and approaches of the economic analysis of law.

Keywords: antitrust, data, data ownership, digital economy, property rights

Procedia PDF Downloads 74
25855 Protecting the Cloud Computing Data Through the Data Backups

Authors: Abdullah Alsaeed

Abstract:

Virtualized computing and cloud computing infrastructures are no longer buzzwords or marketing terms; they are a core reality in today's corporate Information Technology (IT) organizations. Hence, developing effective and efficient methodologies for data backup and data recovery is required more than ever. The purpose of data backup and recovery techniques is to help organizations strategize their business continuity and disaster recovery approaches. To accomplish this strategic objective, a variety of mechanisms have been proposed in recent years. This research paper explores and examines the latest techniques and solutions for providing data backup and restoration for cloud computing platforms.

Keywords: data backup, data recovery, cloud computing, business continuity, disaster recovery, cost-effective, data encryption

Procedia PDF Downloads 80
25854 Missing Link Data Estimation with Recurrent Neural Network: An Application Using Speed Data of Daegu Metropolitan Area

Authors: JaeHwan Yang, Da-Woon Jeong, Seung-Young Kho, Dong-Kyu Kim

Abstract:

In intelligent transportation systems (ITS), information on link characteristics is an essential factor for planning and operation, but in practice not every link has sensors installed on it. A link that has no data is called a 'missing link'. The purpose of this study is to impute the data of these missing links by applying machine learning, in particular deep learning, so that missing link data can be estimated from the data of instrumented links. For the deep learning process, this study uses a recurrent neural network (RNN) to handle time-series road data. As input, Dedicated Short-Range Communications (DSRC) data from Dalgubul-daero in the Daegu Metropolitan Area were fed into the learning process. The network takes the 17 links with available data as input and uses 2 hidden layers to estimate the data of 1 missing link. As a result, the estimated data for the target link showed about 94% accuracy compared with the actual data.
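
A minimal sketch of the forward pass of such a network, using an untrained Elman-style recurrent cell in NumPy: a time series of 17 instrumented-link speed vectors goes in, and one scalar estimate comes out for the missing link. The layer sizes and random weights are illustrative, not the study's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 17, 8, 1  # 17 instrumented links -> 1 missing link

# randomly initialized weights; the real model is trained on DSRC speed data
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

def forward(speed_sequence):
    """Run the recurrent cell over a time series of 17-link speed vectors
    and return the last-step estimate for the missing link."""
    h = np.zeros(n_hidden)
    for x in speed_sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)  # recurrent state update
    return float((W_hy @ h)[0])

# five time steps of (normalized) speeds from the instrumented links
estimate = forward([np.full(n_in, 0.5) for _ in range(5)])
```

Training (e.g., backpropagation through time against links held out as pseudo-missing) is what would bring this from random output to the roughly 94% accuracy reported.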

Keywords: data estimation, link data, machine learning, road network

Procedia PDF Downloads 507
25853 Tagging a Corpus of Media Interviews with Diplomats: Challenges and Solutions

Authors: Roberta Facchinetti, Sara Corrizzato, Silvia Cavalieri

Abstract:

The increasing interconnection between data digitalization and linguistic investigation has given rise to unprecedented potentialities and challenges for corpus linguists, who need to master IT tools for data analysis and text processing, as well as to develop techniques for efficient and reliable annotation in specific mark-up languages that encode documents in a format that is both human- and machine-readable. In the present paper, the challenges emerging from the compilation of a linguistic corpus are taken into consideration, focusing in particular on the English language. To do so, the case study of the InterDiplo corpus is illustrated. The corpus, currently under development at the University of Verona (Italy), represents a novelty in terms both of the data included and of the tag set used for its annotation. The corpus covers media interviews and debates with diplomats and international operators conversing in English with journalists who do not share the same lingua-cultural background as their interviewees. To date, this appears to be the first tagged corpus of international institutional spoken discourse, and it will be an important database not only for linguists interested in corpus analysis but also for experts operating in international relations. Special attention is dedicated to the structural mark-up, part-of-speech annotation, and tagging of discursive traits, which are the innovative parts of the project, being the result of a thorough study to find the solution best suited to the analytical needs of the data. Several aspects are addressed, with special attention to the tagging of the speakers' identity and the communicative events. Prominence is given to the annotation of question/answer exchanges in order to investigate the interlocutors' choices and how such choices impact communication. Indeed, the automated identification of questions, in relation to the expected answers, is functional to understanding how interviewers elicit information as well as how interviewees provide their answers to fulfil their respective communicative aims. A detailed description of the aforementioned elements is given using the InterDiplo-Covid19 pilot corpus. The results of our preliminary analysis highlight the viable solutions found in the construction of the corpus in terms of XML conversion, metadata definition, tagging system, and discursive-pragmatic annotation, to be included via Oxygen.

Keywords: spoken corpus, diplomats' interviews, tagging system, discursive-pragmatic annotation, English linguistics

Procedia PDF Downloads 175
25852 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are common, economic, and reliable ways to detect global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure changes its stiffness, so the method determines damage from changes in the structural stiffness parameters: the changes in static response caused by damage are used to form a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems; here, the optimization minimizes an objective function involving the difference between the static response vectors of the damaged and healthy structures. Several damage detection scenarios were defined (single and multiple damage). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; obtaining the best result indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the multiple-damage scenarios are also identified efficiently. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
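
The optimization loop described above can be sketched with a toy model: each element's stiffness is scaled by a damage factor α ∈ (0, 1], and a GA searches for the factors whose predicted static displacements match the 'measured' ones. The three-spring structure, GA settings, and damage scenario below are all illustrative, not the paper's truss:

```python
import random

random.seed(42)

K0 = [100.0, 100.0, 100.0]    # healthy element stiffnesses (toy model)
LOADS = [50.0, 50.0, 50.0]    # static forces applied per element
TRUE_ALPHA = [1.0, 0.7, 1.0]  # element 2 has lost 30% of its stiffness

def response(alpha):
    """Static displacement of each independent element: u = F / (alpha * k)."""
    return [f / (a * k) for f, a, k in zip(LOADS, alpha, K0)]

MEASURED = response(TRUE_ALPHA)  # stands in for measured displacements

def fitness(alpha):
    """Squared difference between model and 'measured' static responses."""
    return sum((u - m) ** 2 for u, m in zip(response(alpha), MEASURED))

def detect_damage(pop_size=40, generations=60):
    pop = [[random.uniform(0.3, 1.0) for _ in K0] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(K0))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:           # gaussian mutation
                i = random.randrange(len(K0))
                child[i] = min(1.0, max(0.3, child[i] + random.gauss(0.0, 0.05)))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = detect_damage()
```

In this sketch the damaged element shows up as the gene driven well below 1.0, mirroring how the GA localizes and quantifies damage from static data.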

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 227
25851 Customer Data Analysis Model Using Business Intelligence Tools in Telecommunication Companies

Authors: Monica Lia

Abstract:

This article presents a customer data analysis model using business intelligence tools for data modelling, transformation, visualization, and dynamic report building. The analysis of an organization's customers is based on information from its transactional systems. The paper shows how to develop the data model starting from the data that companies hold in their own operational systems. This data can be transformed into useful information about customers using business intelligence tools. In a mature market, understanding the information inside the data and making forecasts for strategic decisions become more important. Business intelligence tools are used in business organizations as support for decision-making.
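As a toy illustration of the kind of transform described (not the model in the paper), the snippet below joins an invented customer dimension table with a transaction fact table and aggregates revenue per customer segment, the sort of summary a BI dashboard would display:

```python
# Illustrative sketch only: a toy star-schema join between a customer
# dimension and a transaction fact table (invented sample data).
customers = {                       # dimension table: customer_id -> segment
    1: "prepaid", 2: "postpaid", 3: "prepaid",
}
transactions = [                    # fact table: (customer_id, amount)
    (1, 10.0), (2, 35.0), (3, 5.0), (1, 12.5), (2, 40.0),
]

# ETL-style transform: aggregate revenue per customer segment.
revenue_by_segment = {}
for cust_id, amount in transactions:
    segment = customers[cust_id]
    revenue_by_segment[segment] = revenue_by_segment.get(segment, 0.0) + amount

print(revenue_by_segment)   # {'prepaid': 27.5, 'postpaid': 75.0}
```

In a real deployment this aggregation would live in a data mart fed by an ETL pipeline; the logic, however, is exactly this join-and-aggregate.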

Keywords: customer analysis, business intelligence, data warehouse, data mining, decisions, self-service reports, interactive visual analysis, dynamic dashboards, use case diagram, process modelling, logical data model, data mart, ETL, star schema, OLAP, data universes

Procedia PDF Downloads 423
25850 Fruit and Vegetable Consumption in High School Students in Bandar Abbas, Iran: An Application of the Trans-Theoretical Model

Authors: Aghamolaei Teamur, Hosseini Zahra, Ghanbarnejad Amin

Abstract:

Introduction: A diet rich in fruits and vegetables is of great importance, especially for adolescents, due to the need for nutrients and the rapid growth of this age group. The aim of this study was to investigate the relationship between decisional balance and self-efficacy and the stages of change for fruit and vegetable consumption in high school students in Bandar Abbas, Iran. Methods: In this descriptive-analytical study, data were collected from 345 students studying in 8 high schools of Bandar Abbas, selected through multistage sampling. To collect the data, separate questionnaires were designed for evaluating each of the variables, including the stages of change, perceived benefits, perceived barriers, and self-efficacy of fruit and vegetable consumption. Decisional balance was estimated by subtracting the perceived barriers from the perceived benefits. The data were analyzed using SPSS 19 and one-way ANOVA. Results: The results indicated that individuals' progress along the stages of change from pre-contemplation to maintenance was associated with a significant increase in their decisional balance and self-efficacy for fruit and vegetable consumption (P < 0.001). The lowest levels of decisional balance and self-efficacy for fruit consumption appeared in the pre-contemplation stage, and the highest levels in the maintenance stage; the same trends were observed for vegetable consumption. Conclusion: Decisional balance and self-efficacy should be considered in designing interventions to increase consumption of fruits and vegetables. Educational programs based on the Trans-theoretical Model (TTM) need to place more emphasis on the enhancement of perceived benefits and the elimination of perceived barriers regarding consumption of fruits and vegetables.
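The decisional-balance computation described above (perceived benefits minus perceived barriers, compared across stages of change) can be sketched as follows; the respondent scores are invented for illustration only:

```python
# Toy sketch of the decisional-balance score (benefits minus barriers)
# grouped by stage of change; all scores are invented sample data.
respondents = [
    # (stage, perceived_benefits, perceived_barriers)
    ("pre-contemplation", 10, 8),
    ("pre-contemplation", 11, 9),
    ("maintenance", 18, 4),
    ("maintenance", 19, 5),
]

balance_by_stage = {}
for stage, benefits, barriers in respondents:
    balance_by_stage.setdefault(stage, []).append(benefits - barriers)

# Mean decisional balance per stage (the study compares these with ANOVA).
means = {s: sum(v) / len(v) for s, v in balance_by_stage.items()}
print(means)   # {'pre-contemplation': 2.0, 'maintenance': 14.0}
```

The study's pattern, higher balance in maintenance than in pre-contemplation, is what a table of such stage means would show.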

Keywords: fruit, vegetable, decision balance, self-efficacy, trans-theoretical model

Procedia PDF Downloads 289
25849 Pragmatic Strategies of Selected Online Articles on the Buhari/Jubril Dilemma

Authors: Oluwaseun Amusa

Abstract:

The online space has continued to be a platform not only for private and mundane discussions but also a tribune for voicing critical political and national opinions. Nigerians and the international community have employed the online media, as well as other media platforms, to articulate their thoughts on the claims favouring the possibility that the incumbent president of Nigeria, President Muhammadu Buhari, died after a prolonged illness in 2017 and that a clone, Jubril of Sudan, was put in his place. This study thus examines the pragmatic strategies employed in online articles on the national dilemma caused by the Buhari/Jubril claims and refutations, in response to the lacuna in the literature on analytical investigations of the subject. Two online articles titled 'Buhari: The real, the fake and the dead' and 'Taking the Buhari/Jubril story seriously', authored by the Nigerian writers Tunde Odesola and Abimbola Adelakun, respectively, and retrieved from the 360nobs.com and Nairaland blogs on December 3, 2018, and December 7, 2018, respectively, served as data for the study. The data were analysed using Stance Theory and the Pragmatic Act Theory. Findings showed that the writers employed stance acts, rhetorical questions, metaphors, histo-political allusions, name-calling, and derogatives in achieving the pragmeme of disabusing. This results in a pragmatic reconstruction of readers' views on the issue.

Keywords: Buhari/Jubril claims, online articles, pragmatic strategies, stance theory

Procedia PDF Downloads 141
25848 Numerical Solution Speedup of the Laplace Equation Using FPGA Hardware

Authors: Abbas Ebrahimi, Mohammad Zandsalimy

Abstract:

The main purpose of this study is to investigate the feasibility of using FPGA (field-programmable gate array) chips as alternatives to conventional CPUs for accelerating the numerical solution of the Laplace equation. An FPGA is an integrated circuit that contains an array of logic blocks, and its architecture can be reprogrammed and reconfigured after manufacturing. Complex circuits for various applications can be designed and implemented using FPGA hardware. The reconfigurable hardware used in this paper is an SoC (system on a chip) FPGA that integrates both microprocessor and FPGA architectures into a single device. In the present study, the Laplace equation is implemented and solved numerically on both the reconfigurable hardware and a CPU, and the precision of the results and the speedups of the calculations are compared. The computational process on the FPGA is up to 20 times faster than on a conventional CPU, with the same data precision. An analytical solution is used to validate the results.
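As a reference point for what the FPGA accelerates, a plain CPU sketch of the Jacobi stencil for the 2-D Laplace equation is shown below, validated against an analytical solution as the paper does; the grid size and boundary condition are illustrative choices, not the study's configuration:

```python
# CPU reference sketch of the Jacobi iteration for the 2-D Laplace
# equation on the unit square; an FPGA pipeline parallelizes this stencil.
N = 21                       # grid points per side (illustrative)
h = 1.0 / (N - 1)

# Dirichlet boundary u = x makes the exact harmonic solution u(x, y) = x.
u = [[i * h if (i in (0, N - 1) or j in (0, N - 1)) else 0.0
      for i in range(N)] for j in range(N)]

for _ in range(2000):        # Jacobi sweeps: average of the 4 neighbours
    new = [row[:] for row in u]
    for j in range(1, N - 1):
        for i in range(1, N - 1):
            new[j][i] = 0.25 * (u[j][i - 1] + u[j][i + 1] +
                                u[j - 1][i] + u[j + 1][i])
    u = new

# Validate against the analytical solution, as the study does.
err = max(abs(u[j][i] - i * h) for j in range(N) for i in range(N))
print(f"max error vs analytical: {err:.2e}")
```

Each grid point's update depends only on its four neighbours from the previous sweep, which is why the stencil maps so naturally onto parallel FPGA logic.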

Keywords: accelerating numerical solutions, CFD, FPGA, hardware definition language, numerical solutions, reconfigurable hardware

Procedia PDF Downloads 375
25847 Analysis of the Behavior of the Structure Under Internal ANFO Explosion

Authors: Seung-Min Ko, Seung-Jai Choi, Gun Jung, Jang-Ho Jay Kim

Abstract:

Although extensive explosion-related research has been performed in the past several decades, almost no research has focused on internal blasts. However, internal blast research is needed to understand the behavior of a containment structure or building under internal blast loading, as in the case of the Chornobyl and Fukushima nuclear accidents. Therefore, an internal blast study concentrating on RC and PSC structures is performed. Test data obtained from reinforced concrete (RC) and prestressed concrete (PSC) tubular structures subjected to internal explosions using ammonium nitrate/fuel oil (ANFO) charges are used to assess their deformation resistance and ultimate failure load based on the change in structural stiffness under various charge weights. ANFO charge weights of 15.88, 20.41, 22.68, and 24.95 kg were selected for the RC tubular structures, and 22.68, 24.95, 27.22, 29.48, and 31.75 kg for the PSC tubular structures; the charges were detonated at the center of the cross section at mid-span, with a standoff distance of 1,000 mm to the inner wall surface. The test data were then used to predict the internal charge weight required to fail a real-scale reinforced concrete containment vessel (RCCV) and a prestressed concrete containment vessel (PCCV). Analytical results based on the experimental data were derived using simple model assumptions, and an approach using the relationship among stiffness, deformation, and explosive weight was used to formulate a general method for analyzing internally blasted tubular structures. A model of the internal explosion of a steel tube was used as an example for validation. The proposed method can be used generically, with factors chosen according to the material characteristics of the target structures. The results of the study are discussed in detail in the paper.

Keywords: internal blast, reinforced concrete, RCCV, PCCV, stiffness, blast safety

Procedia PDF Downloads 68
25846 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas

Authors: Michel Soto Chalhoub

Abstract:

Building-code literature provides recommendations for normalizing the calculation of the dynamic properties of structures. Most building codes distinguish among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used with normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is the overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared with buildings made of a homogeneous material such as steel, or of concrete produced under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis in which the computed period serves as the dependent variable and five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction.
Overall, the present analysis shows that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, in which fundamental periods were computed using numerical techniques and eigenvalue solutions. The recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging; however, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
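To illustrate the two approaches compared above, the sketch below computes the fundamental period of a hypothetical two-storey shear building from the eigenvalue problem and from a simplified height-based code formula; all masses, stiffnesses, and the coefficient are invented illustrative values, not the study's data:

```python
# Hedged illustration: fundamental period of a 2-storey shear building
# from det(K - w^2 M) = 0 versus a simplified height-based code formula.
# All numerical values are invented for illustration.
import math

m = 2.0e5          # lumped floor mass per storey (kg)
k = 4.0e8          # storey lateral stiffness (N/m)
storey_h = 3.0     # individual storey height (m)
n_storeys = 2

# For equal masses/stiffnesses, K = [[2k, -k], [-k, k]] and M = m*I, so
# det(K - lam*M) = 0 reduces to lam^2*m^2 - 3*k*m*lam + k^2 = 0.
lam1 = (3.0 - math.sqrt(5.0)) / 2.0 * k / m   # smallest eigenvalue = w1^2
T_eigen = 2.0 * math.pi / math.sqrt(lam1)     # fundamental period (s)

# Simplified code-type formula T = Ct * H^(3/4); 0.0466 is a typical
# metric coefficient for concrete moment frames, used here as an example.
H = n_storeys * storey_h
T_code = 0.0466 * H ** 0.75

print(f"T (eigenvalue): {T_eigen:.3f} s, T (code formula): {T_code:.3f} s")
```

For arbitrary inputs the two estimates differ, which is the paper's point: height alone does not capture story height, stiffness, mass, or concrete condition.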

Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra

Procedia PDF Downloads 225
25845 Opening up Government Datasets for Big Data Analysis to Support Policy Decisions

Authors: K. Hardy, A. Maurushat

Abstract:

Policy makers are increasingly looking to make evidence-based decisions. Evidence-based decisions have historically relied on the rigorous methodologies of empirical studies by research institutes, as well as on less reliable immediate surveys and polls, often with limited sample sizes. As we move into the era of big data analytics, policy makers are looking to different methodologies to deliver reliable empirics in real time. The question is no longer why people have behaved a certain way for the last 10 years, but why they are behaving this way now, whether this is undesirable, and how we can have an impact to promote change immediately. Big data analytics relies heavily on government data that has been released into the public domain. The open data movement promises greater productivity and more efficient delivery of services; however, Australian government agencies remain reluctant to release their data to the general public. This paper considers the barriers to releasing government data as open data, and how these barriers might be overcome.

Keywords: big data, open data, productivity, data governance

Procedia PDF Downloads 361
25844 On Influence of Web Openings Presence on Structural Performance of Steel and Concrete Beams

Authors: Jakub Bartus, Jaroslav Odrobinak

Abstract:

In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study not only analyses the behavior of composite beams with web openings but also emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange. The major investigation focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, improving the load-carrying capacity of composite beams was a prime objective. The study is divided into analytical and numerical parts. The analytical part served as the initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the discretization of the structural problem in the form of a finite element (FE) model using beam and shell elements and accounting for material non-linearities. As an outcome, several conclusions were drawn describing and explaining the effect of web openings on the structural performance of composite beams.

Keywords: beam, steel flange, total strain, web opening

Procedia PDF Downloads 69
25843 Nonlinear Modelling of Sloshing Waves and Solitary Waves in Shallow Basins

Authors: Mohammad R. Jalali, Mohammad M. Jalali

Abstract:

The earliest theories of sloshing waves and solitary waves, based on potential-theory idealisations and irrotational flow, have been extended to be applicable to more realistic domains. To this end, computational fluid dynamics (CFD) methods are widely used. Three-dimensional CFD methods, such as Navier-Stokes solvers with volume-of-fluid treatment of the free surface and Navier-Stokes solvers with mappings of the free surface, inherently impose high computational expense; therefore, considerable effort has gone into developing depth-averaged approaches. Examples of such approaches include the Green–Naghdi (GN) equations. In a Cartesian system, the GN velocity profile depends on the horizontal directions, the x-direction and the y-direction. The effect of the vertical (z) direction is also taken into consideration by applying a weighting function in the approximation. GN theory considers the effect of vertical acceleration and the consequent non-hydrostatic pressure; moreover, in GN theory the flow is rotational. The present study illustrates the application of the GN equations to the propagation of sloshing waves and solitary waves. For this purpose, the GN equations solver is verified against the benchmark tests of Gaussian hump sloshing and solitary wave propagation in shallow basins. Analysis of the free-surface sloshing of even harmonic components of an initial Gaussian hump demonstrates that the GN model gives predictions in satisfactory agreement with the linear analytical solutions. Discrepancies between the GN predictions and the linear analytical solutions arise from wave nonlinearities due to the wave amplitude itself and from wave-wave interactions. Numerically predicted solitary wave propagation indicates that the GN model produces simulations in good agreement with the analytical solution of linearised wave theory.
Comparison between the GN model's numerical prediction and the result from perturbation analysis confirms that the nonlinear interaction between a solitary wave and a solid wall is satisfactorily modelled. Moreover, solitary wave propagation at an angle to the x-axis and the interaction of solitary waves with each other are simulated to validate the developed model.
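For reference, the classical first-order analytical solitary-wave profile often used as a benchmark for depth-averaged solvers of this kind can be evaluated directly; the parameter values below are illustrative, not those of the study:

```python
# Hedged sketch of the classical first-order solitary-wave solution
# eta = a * sech^2( sqrt(3a / (4 h^3)) * (x - c t) ), c = sqrt(g (h + a)),
# a standard analytical benchmark for shallow-water solvers.
import math

g, h, a = 9.81, 1.0, 0.2      # gravity (m/s^2), depth (m), amplitude (m)
c = math.sqrt(g * (h + a))    # solitary-wave celerity

def eta(x, t):
    """Free-surface elevation of the travelling solitary wave."""
    arg = math.sqrt(3.0 * a / (4.0 * h ** 3)) * (x - c * t)
    return a / math.cosh(arg) ** 2

# The crest travels at speed c without changing shape:
print(round(eta(0.0, 0.0), 3), round(eta(c * 2.0, 2.0), 3))  # 0.2 0.2
```

A solver's numerically propagated wave can be compared point by point against this profile, which is how benchmarks like the one above are typically scored.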

Keywords: Green–Naghdi equations, nonlinearity, numerical prediction, sloshing waves, solitary waves

Procedia PDF Downloads 278
25842 A Review on Existing Challenges of Data Mining and Future Research Perspectives

Authors: Hema Bhardwaj, D. Srinivasa Rao

Abstract:

Technology for analysing, processing, and extracting meaningful information from enormous and complicated datasets can be termed "big data". Big data mining and analysis are extremely helpful for business activities such as making decisions, building organisational plans, researching the market efficiently, and improving sales, because typical management tools cannot handle such complicated datasets. Big data raises special computational and statistical issues, such as measurement errors, noise accumulation, spurious correlation, and storage and scalability limitations; these unique problems call for new computational and statistical paradigms. This paper offers an overview of the literature on big data mining and its process, along with its problems and difficulties, with a focus on the unique characteristics of big data. Organizations face several difficulties when undertaking data mining, which has an impact on their decision-making. Every day, terabytes of data are produced, yet only around 1% of that data is actually analyzed. The study presents the ideas of data mining and analysis and of recently developed knowledge discovery techniques with practical application systems. The conclusion also includes a list of issues and difficulties for further research in the area, and the paper discusses the main big data and data mining challenges for management.

Keywords: big data, data mining, data analysis, knowledge discovery techniques, data mining challenges

Procedia PDF Downloads 103
25841 Experimental and CFD Study of a Designed Small Wind Turbine

Authors: Tarek A. Mekail, Walid M. A. Elmagid

Abstract:

Much research has concentrated on improving the aerodynamic performance of wind turbine blades through testing and theoretical studies. A small wind turbine blade was designed, fabricated, and tested. The power performance of small horizontal-axis wind turbines is simulated in detail using computational fluid dynamics (CFD). Three-dimensional CFD models are presented using ANSYS-CFX v13 software for predicting the performance of a small horizontal-axis wind turbine. The simulation results are compared with experimental data measured from a small wind turbine model, which was designed according to a vehicle-based test system. The analysis of the wake effect and of the blade aerodynamics can be carried out when the rotational effect is simulated. Finally, a comparison between the experimental, numerical, and analytical performance has been made, and the agreement is fairly good.

Keywords: small wind turbine, CFD of wind turbine, CFD, performance of wind turbine, test of small wind turbine, wind turbine aerodynamic, 3D model

Procedia PDF Downloads 534
25840 Design of Built-Spaces and Enhanced Psychological Wellbeing by Limiting Effect of SBS: An Analytical Study across Students in Indian Universities

Authors: Sadaf H. Khan, Jyoti Kumar

Abstract:

Sick building syndrome (SBS) is a situation in which the inhabitants of a building develop illness symptoms or chronic disease as a result of the building in which they reside or work. Certain symptoms tend to become more severe the longer an individual spends in the building; however, they generally improve or even disappear after the individual leaves that space. Although the design of built spaces is a crucial factor in regulating these symptoms, it still needs to be identified which specific design features of a built space trigger SBS. Much of the research work to date has focused on the physiological or physical sickness caused by inappropriate built-space design. In this paper, the psychological aspects of SBS will be investigated in the adult population, more specifically in graduate students in India settling back into their previous physical work environments, i.e., campuses, classrooms, and hostels, after a break of more than a year caused by the lockdowns during the Covid-19 crisis. The study will follow an analytical approach, and the data will be collected through self-reported online surveys. The purpose of this study is to enquire into the causal agents, diagnosable symptoms, and remedial design of built spaces that can enhance the productivity of built environments and better serve inhabitants by improving their psychological wellbeing, which is a rising concern. Because SBS symptoms can be studied only within the first few weeks after an occupant starts interacting with a built environment, and cease once the occupant leaves that space or zone, the post-lockdown return of students to their respective campuses provides an opportunity to draw clear conclusions about the relationship between the design of built spaces and the psychological sickness syndrome associated with it.
The study takes a one-of-a-kind approach to understanding and formulating methods for improving psychological wellbeing within a built setting by better identifying the factors associated with these psychological symptoms, including anxiety, mental fatigue, reduced attention span, and reduced memory span, refined SBS symptoms discussed by Molhave in his 1987 study.

Keywords: built-environment psychology, built-space design, healthcare architecture, psychological wellbeing

Procedia PDF Downloads 165
25839 A Systematic Review on Challenges in Big Data Environment

Authors: Rimmy Yadav, Anmol Preet Kaur

Abstract:

Big data has demonstrated vast potential in streamlining operations, decision-making, and spotting business trends in different fields, for example, manufacturing, finance, and information technology. This paper gives a multi-disciplinary overview of the research issues in big data and of its procedures, tools, and systems related to privacy, data storage management, network and energy utilization, fault tolerance, and data representation. In addition, the resulting challenges and the opportunities available in the big data platform are outlined.

Keywords: big data, privacy, data management, network and energy consumption

Procedia PDF Downloads 305
25838 Building a Model for Information Literacy Education in School Settings

Authors: Tibor Koltay

Abstract:

Among the varied new literacies, information literacy is not only the best known but also displays numerous models and frameworks. Nonetheless, it still lacks a complex theoretical model that could be applied to information literacy education in public (K-12) education, which often makes use of constructivist approaches. This paper aims to present the main features of such a model. To develop it, the literature and practice of phenomenographic and sociocultural theories, as well as discourse-analytical approaches to information literacy, have been reviewed. Besides these constructivist and expression-based educational approaches, the new model couples them with a cognitive model that takes the development of informational and operational knowledge into account. The convergences between different literacies (information literacy, media literacy, media and information literacy, and data literacy) were taken into account as well. The model also draws on a three-country survey that examined secondary school teachers' attitudes to information literacy. The results of this survey show that only some of the respondents feel properly prepared to teach information literacy courses and think that they can teach information literacy skills by themselves, while they see the librarian as the expert in information literacy education. The use of the resulting model is not restricted to enhancing theory: it is meant to raise awareness of information literacy and related literacies, and the next phase of the model's development will be a pilot study that verifies the usefulness of the methodology for practical information literacy education in selected Hungarian secondary schools.

Keywords: communication, data literacy, discourse analysis, information literacy education, media and information literacy, media literacy, phenomenography, public education, sociocultural theory

Procedia PDF Downloads 139