Search results for: validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1375

355 Optimization of the Energy Consumption of the Pottery Kilns by the Use of Heat Exchanger as Recovery System and Modeling of Heat Transfer by Conduction Through the Walls of the Furnace

Authors: Maha Bakakri, Rachid Tadili, Fatiha Lemmini

Abstract:

Morocco is one of the few countries that have kept their traditional crafts despite the competition of modern industry and its impact on manual labor. The optimization of energy consumption therefore becomes an obligation, and this is the purpose of this document. In this work, we present some characteristics of the furnace studied, its operating principle, and experimental measurements of the evolution of the temperatures inside and outside the walls of the furnace, values which are used later in the calculation of its thermal losses. In order to determine the major source of the furnace's thermal losses, we established its heat balance. The energy consumed, the useful energy, and the thermal losses through the walls and the chimney of the furnace were calculated from the experimental measurements we carried out for several firings. The results show that the energy consumption of this type of furnace is very high and that the main source of energy loss is the heat carried away by the combustion gases escaping through the chimney, while the losses through the walls are relatively small. We have therefore opted for energy recovery as a solution, recovering part of the lost heat through a heat exchanger system using a double tube introduced into the flue gas exhaust stack. The study of the heat recovery system is presented and the heat balance inside the exchanger is established. In this paper, we also present the numerical modeling of heat transfer by conduction through the walls of the furnace. A numerical model was established based on the finite volume method and the double-sweep (double scan) method. It makes it possible to determine the temperature profile of the furnace, to calculate the thermal losses of its walls, and to deduce the thermal losses due to the combustion gases. The model is validated against the experimental measurements carried out on the furnace. The results obtained in this work, relating to the energy consumed during the operation of the furnace, are significant and fall within the energy efficiency framework that has become a key element of global energy policies, being the fastest and cheapest way to address energy, environmental and economic security problems.
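
For readers unfamiliar with the double-sweep technique named in the abstract, the sketch below shows steady one-dimensional finite-volume conduction through a wall, solved with the Thomas algorithm for tridiagonal systems. The boundary temperatures and grid size are assumed for illustration only; this is not the authors' model.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system by the double-sweep (Thomas) method.
    a = sub-diagonal, b = main diagonal, c = super-diagonal, d = right-hand side;
    a[0] and c[-1] are unused."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward sweep
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward sweep
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def wall_temperature_profile(t_inner, t_outer, n_cells):
    """Steady 1-D conduction across a uniform wall: each interior finite-volume
    balance gives T[i-1] - 2*T[i] + T[i+1] = 0, with fixed surface temperatures."""
    a = [0.0] + [1.0] * (n_cells - 2) + [0.0]
    b = [1.0] + [-2.0] * (n_cells - 2) + [1.0]
    c = [0.0] + [1.0] * (n_cells - 2) + [0.0]
    d = [t_inner] + [0.0] * (n_cells - 2) + [t_outer]
    return thomas_solve(a, b, c, d)

# Assumed temperatures (degrees C): hot inner face, ambient outer face.
profile = wall_temperature_profile(900.0, 30.0, 11)
```

With uniform conductivity the solution is the expected linear profile between the two surface temperatures; the wall heat loss then follows from the gradient and the conductivity.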

Keywords: energy consumption, energy recovery, modeling, energy efficiency

Procedia PDF Downloads 73
354 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive but also the most energy-consuming weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments aim at low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy-efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method is a state-of-the-art analysis of the main weft insertion components in order to prioritize the components and processes with the highest energy demand. The identified major components are then investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used to produce a more energy-efficient design of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip reduces the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits more of the flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 197
353 Utilizing Spatial Uncertainty of On-The-Go Measurements to Design Adaptive Sampling of Soil Electrical Conductivity in a Rice Field

Authors: Ismaila Olabisi Ogundiji, Hakeem Mayowa Olujide, Qasim Usamot

Abstract:

The main reasons for site-specific management of agricultural inputs are to increase the profitability of crop production, to protect the environment and to improve product quality. Information about the variability of different soil attributes within a field is essential for the decision-making process. The lack of fast and accurate acquisition of soil characteristics remains one of the biggest limitations of precision agriculture, since sampling is expensive and time-consuming. Adaptive sampling has been proven to be an accurate and affordable technique for planning within-field site-specific management of agricultural inputs. This study employed the spatial uncertainty of soil apparent electrical conductivity (ECa) estimates to identify areas of the field for adaptive re-survey. The original dataset was split into validation and calibration groups, and the calibration group was sub-grouped into three sets with different measurement-pass intervals. A conditional simulation was performed on the field ECa to evaluate the spatial uncertainty of the ECa estimates using geostatistical techniques. High-uncertainty areas in each set were grouped using image segmentation in MATLAB, separating areas of high and low uncertainty. Finally, an adaptive re-survey was carried out in the areas of high uncertainty. The adaptive re-survey significantly reduced the time that resampling the whole field would require and resulted in ECa estimates with minimal error. For the most spacious transect, the root mean square error (RMSE) from the initial crude sampling survey was reduced after the adaptive re-survey to a value close to that of the ECa obtained with an all-field re-survey. The estimated sampling time for the adaptive re-survey was 45% less than that of the all-field re-survey. The results indicate that designing adaptive sampling through spatial uncertainty models significantly mitigates sampling cost while preserving the accuracy of the observations.
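
The selection step described above can be sketched in a few lines: given an ensemble of conditionally simulated ECa maps, flag the grid cells whose across-realization variance is highest for re-survey. The ensemble below is a toy stand-in (a made-up noisy corner of a field), not the study's MATLAB segmentation.

```python
import random
import statistics

def flag_resurvey_cells(realizations, fraction=0.2):
    """Given an ensemble of conditional simulations (list of equal-length lists,
    one ECa value per grid cell), flag the cells whose across-realization
    variance falls in the top `fraction` as candidates for adaptive re-survey."""
    n_cells = len(realizations[0])
    variances = [statistics.pvariance([r[i] for r in realizations])
                 for i in range(n_cells)]
    n_flag = max(1, round(fraction * n_cells))
    threshold = sorted(variances, reverse=True)[n_flag - 1]
    return [i for i, v in enumerate(variances) if v >= threshold]

# Toy ensemble: 30 realizations over 50 cells; cells 40-49 are made much
# noisier to mimic an under-sampled part of the field.
random.seed(0)
ens = [[random.gauss(10, 0.2 if i < 40 else 2.0) for i in range(50)]
       for _ in range(30)]
cells = flag_resurvey_cells(ens, fraction=0.2)
```

The flagged indices concentrate in the high-variance region, which is exactly where extra measurement passes buy the largest reduction in RMSE.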

Keywords: soil electrical conductivity, adaptive sampling, conditional simulation, spatial uncertainty, site-specific management

Procedia PDF Downloads 132
352 Asset Liability Modelling for Pension Funds by Introducing Leslie Model for Population Dynamics

Authors: Kristina Sutiene, Lina Dapkute

Abstract:

The paper investigates the current demographic trends that threaten the sustainability of pension systems in most EU regions. The demographic challenge is usually composed of several drivers arising from the structure and trends of the population in a country. As the research case, three main variables of demographic risk in Lithuania were singled out and used in the analysis. Over the last two decades, the country has presented a peculiar demographic situation characterized by pessimistic fertility trends, a negative net migration rate and rising life expectancy, which together drive significant changes in the working-age population. This study therefore sets out to assess the relative impact of these risk factors both individually and in aggregate, while assuming that economic trends evolve as they have historically. The evidence is presented using data from pension funds that operate in Lithuania and are financed by defined-contribution plans. To achieve this goal, a discrete-time pension fund value model is developed that reflects the main operational modalities: contribution income from current participants and new entrants, pension disbursement and administrative expenses; the fund value also fluctuates with returns from investment activity. An age-structured Leslie population dynamics model is integrated into the main model to describe how fertility, migration and mortality rates depend on age. Validation concluded that the Leslie model adequately fits current population trends in Lithuania. The elasticity of the pension system is examined using Loimaranta efficiency as a measure for comparing plausible long-term developments of the demographic risks. With respect to the research question, it was found that the demographic risks have different levels of influence on the future value of aggregated pension funds: fertility rates have the highest importance, while mortality rates have only a minor impact. Further studies trying out different economic scenarios in the integrated model would be worthwhile.
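
The core of a Leslie model is a single annual update: births flow into the youngest age class and each class ages forward with its survival rate. A minimal sketch with made-up rates (not Lithuanian statistics) might look like:

```python
def leslie_step(population, fertility, survival):
    """One annual step of a Leslie model. population[i] is the head count in
    age class i, fertility[i] the births per individual in class i, and
    survival[i] the fraction of class i surviving into class i+1."""
    births = sum(f * n for f, n in zip(fertility, population))
    aged = [s * n for s, n in zip(survival, population[:-1])]
    return [births] + aged

# Three illustrative age classes (young, working-age, retired); the rates
# below are invented for the example.
pop = [100.0, 80.0, 60.0]
fert = [0.0, 1.2, 0.4]
surv = [0.9, 0.8]
next_pop = leslie_step(pop, fert, surv)
```

Iterating this step, optionally with a net-migration term added per class, produces the age-structured projections that feed contribution income and disbursements in the pension fund model.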

Keywords: asset liability modelling, Leslie model, pension funds, population dynamics

Procedia PDF Downloads 269
351 Principal Component Analysis Combined Machine Learning Techniques on Pharmaceutical Samples by Laser Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and only micro-destructive effects on the material to be tested. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, spectrum profiles of medicine samples were obtained via LIBS. The datasets include two different concentrations for each of two paracetamol-based medicines, namely Aferin and Parafon. The spectrum data of the samples were preprocessed by filling outliers based on quartiles, smoothing the spectra to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were built with two different train-test splits, 70% training - 30% test and 80% training - 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample size is small. The machine learning results on the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms (Decision Trees, Discriminant Analysis, naïve Bayes, Support Vector Machines (SVM), k-Nearest Neighbor (k-NN), Ensemble Learning and Neural Network algorithms) have been applied to LIBS data of paracetamol-based pharmaceutical samples at different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
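
Two of the preprocessing steps named above (quartile-based outlier filling and axis normalization) are simple enough to sketch directly. The spectrum below is a toy vector with one detector spike, not LIBS data from the study; the Tukey-fence rule is one common way to "fill outliers based on quartiles".

```python
import statistics

def fill_outliers_by_quartiles(intensities):
    """Replace points outside the Tukey fences (Q1 - 1.5*IQR, Q3 + 1.5*IQR)
    with the nearest fence, one common quartile-based outlier-filling rule."""
    q1, _, q3 = statistics.quantiles(intensities, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [min(max(x, lo), hi) for x in intensities]

def minmax_normalize(values):
    """Scale values to [0, 1], as done for both the wavelength and intensity axes."""
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) for v in values]

spectrum = [5.0, 5.2, 4.9, 5.1, 5.0, 90.0, 5.3, 4.8]  # 90.0 is a spike
cleaned = fill_outliers_by_quartiles(spectrum)
scaled = minmax_normalize(cleaned)
```

After filling, the spike is clamped to the upper fence, so the subsequent normalization is no longer dominated by a single artifact point.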

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 87
350 Alternative Ways of Knowing and the Construction of a Department Around a Common Critical Lens

Authors: Natalie Delia

Abstract:

This academic paper investigates the transformative potential of incorporating alternative ways of knowing within the framework of Critical Studies departments. Traditional academic paradigms often prioritize empirical evidence and established methodologies, potentially limiting the scope of critical inquiry. In response to this, our research seeks to illuminate the benefits and challenges associated with integrating alternative epistemologies, such as indigenous knowledge systems, artistic expressions, and experiential narratives. Drawing upon a comprehensive review of literature and case studies, we examine how alternative ways of knowing can enrich and diversify the intellectual landscape of Critical Studies departments. By embracing perspectives that extend beyond conventional boundaries, departments may foster a more inclusive and holistic understanding of critical issues. Additionally, we explore the potential impact on pedagogical approaches, suggesting that alternative ways of knowing can stimulate alternative teaching methods and enhance student engagement. Our investigation also delves into the institutional and cultural shifts necessary to support the integration of alternative epistemologies within academic settings. We address concerns related to validation, legitimacy, and the potential clash with established norms, offering insights into fostering an environment that encourages intellectual pluralism. Furthermore, the paper considers the implications for interdisciplinary collaboration and the potential for cultivating a more responsive and socially engaged scholarship. By encouraging a synthesis of diverse perspectives, Critical Studies departments may be better equipped to address the complexities of contemporary issues, encouraging a dynamic and evolving field of study. In conclusion, this paper advocates for a paradigm shift within Critical Studies departments towards a more inclusive and expansive approach to knowledge production.
By embracing alternative ways of knowing, departments have the opportunity to not only diversify their intellectual landscape but also to contribute meaningfully to broader societal dialogues, addressing pressing issues with renewed depth and insight.

Keywords: critical studies, alternative ways of knowing, academic department, Wallerstein

Procedia PDF Downloads 72
349 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald

Abstract:

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child's head responds during injurious loading. Whilst infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors' opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant's head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations in terms of head kinematic responses. Numerical results confirm that the FE head model's mechanical response is in favourable agreement with the PMHS drop test results.

Keywords: finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects

Procedia PDF Downloads 326
348 Performance Evaluation of Production Schedules Based on Process Mining

Authors: Kwan Hee Han

Abstract:

The external environment of an enterprise changes rapidly, driven mainly by global competition, cost-reduction pressures, and new technology. In this situation, the production scheduling function plays a critical role in meeting customer requirements and attaining the goal of operational efficiency. It deals with short-term decision making in the production process of the whole supply chain. The major task of production scheduling is to seek a balance between customer orders and limited resources. In manufacturing companies, this task is difficult because it must efficiently utilize resource capacity under careful consideration of many interacting constraints. At present, many computerized software solutions are used in enterprises to generate realistic production schedules and overcome the complexity of schedule generation. However, most production scheduling systems do not provide sufficient information about the validity of the generated schedule beyond limited statistics. Process mining only recently emerged as a sub-discipline of both data mining and business process management. Process mining techniques enable the useful analysis of a wide variety of processes, such as process discovery, conformance checking, and bottleneck analysis. In this study, the performance of a generated production schedule is evaluated by mining the event log data of a production scheduling software system with process mining techniques, since every software system generates event logs for further uses such as security investigation, auditing and debugging. A process mining approach is proposed for validating the goodness of production schedules generated by scheduling software systems. Using process mining techniques, major evaluation criteria such as workstation utilization, the existence of bottleneck workstations, critical process route patterns, and the workload balance of each machine over time are measured, and finally the goodness of the production schedule is evaluated. By using the proposed process mining approach to evaluate the performance of generated production schedules, the quality of the production schedules of manufacturing enterprises can be improved.
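
The first evaluation criterion named above, workstation utilization, reduces to summing busy time per station from event-log records and dividing by the scheduling horizon. A minimal sketch with an invented three-record log (station names and times are assumptions, not the study's data):

```python
from datetime import datetime

def workstation_utilization(event_log, horizon_start, horizon_end):
    """Busy-time utilization per workstation from (station, start, end)
    event-log records over a fixed scheduling horizon."""
    horizon = (horizon_end - horizon_start).total_seconds()
    busy = {}
    for station, start, end in event_log:
        busy[station] = busy.get(station, 0.0) + (end - start).total_seconds()
    return {s: t / horizon for s, t in busy.items()}

# Toy log: two jobs on CNC-1 and one on PRESS during an 8-hour shift.
log = [
    ("CNC-1", datetime(2024, 1, 1, 8), datetime(2024, 1, 1, 12)),
    ("CNC-1", datetime(2024, 1, 1, 13), datetime(2024, 1, 1, 15)),
    ("PRESS", datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 10)),
]
util = workstation_utilization(log, datetime(2024, 1, 1, 8),
                               datetime(2024, 1, 1, 16))
```

A station whose utilization stays near 1.0 across the horizon while others idle is a candidate bottleneck, which is the next criterion in the list.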

Keywords: data mining, event log, process mining, production scheduling

Procedia PDF Downloads 279
347 Computer Simulation Approach in the 3D Printing Operations of Surimi Paste

Authors: Timilehin Martins Oyinloye, Won Byong Yoon

Abstract:

Simulation technology is being adopted in many industries, with research focusing on the development of new ways in which technology becomes embedded within production, services, and society in general. 3D printing (3DP) technology is developing fast in the food industry. However, the limited processability of high-performance materials restricts the robustness of the process in some cases. Significantly, the printability of materials is the foundation for extrusion-based 3DP, with residual stress being a major challenge in the printing of complex geometry. In many situations, a trial-and-error method is used to determine the optimum printing condition, which wastes time and resources. In this report, three moisture levels of surimi paste were investigated to find the optimum 3DP material and printing conditions by probing the paste's rheology, its flow characteristics in the nozzle, and the post-deposition process using finite element method (FEM) models. Rheological tests revealed that surimi paste with 82% moisture is suitable for 3DP. According to the FEM model, decreasing the nozzle diameter from 1.2 mm to 0.6 mm increased the die swell from 9.8% to 14.1%. The die swell ratio increased due to an increase in the pressure gradient (1.15 × 10⁷ Pa to 7.80 × 10⁷ Pa) at the nozzle exit. The nozzle diameter influenced the fluid properties, i.e., the shear rate, velocity, and pressure in the flow field, as well as the residual stress and the deformation of the printed sample, according to the FEM simulation. The post-printing stability of the model was investigated using the additive layer manufacturing (ALM) model. The ALM simulation revealed that the residual stress and total deformation of the sample depended on the nozzle diameter. A small nozzle diameter (0.6 mm) resulted in greater total deformation (0.023), particularly at the top of the model, which eventually caused the sample to collapse. As the nozzle diameter increased, the accuracy of the model improved up to the optimum nozzle size (1.0 mm). Validation with 3D-printed surimi products confirmed that the nozzle diameter is a key parameter affecting the geometric accuracy of 3DP of surimi paste.

Keywords: 3D printing, deformation analysis, die swell, numerical simulation, surimi paste

Procedia PDF Downloads 68
346 Developing the Principal Change Leadership Non-Technical Competencies Scale: An Exploratory Factor Analysis

Authors: Tai Mei Kin, Omar Abdull Kareem

Abstract:

In light of globalization, educational reform has become a top priority for many countries. However, the task of leading change effectively requires a multidimensional set of competencies. Over the past two decades, the technical competencies of principal change leadership have been extensively analysed and discussed. Comparatively little research has been conducted in the Malaysian education context on non-technical competencies, popularly known as emotional intelligence, which are equally crucial for the success of change. This article provides a validation of the Principal Change Leadership Non-Technical Competencies (PCLnTC) Scale, a tool that practitioners can easily use to assess school principals' level of the change leadership non-technical competencies that facilitate change and maximize change effectiveness. The overall coherence of the PCLnTC model was constructed by incorporating three theories: a) change leadership theory, whereby leading change is the fundamental role of a leader; b) competency theory, in which leadership can be taught and learned; and c) the concept of emotional intelligence, whereby it can be developed, fostered and taught. An exploratory factor analysis (EFA) was used to determine the underlying factor structure of the PCLnTC model. Before conducting the EFA, five pilot-test steps were taken to ensure the validity and reliability of the instrument: a) review by academic colleagues; b) verification and comments from a panel; c) evaluation of the questionnaire format, syntax, design, and completion time; d) evaluation of item clarity; and e) assessment of internal consistency reliability. A total of 335 teachers from 12 High Performing Secondary Schools in Malaysia completed the survey. The PCLnTC Scale, with a six-point Likert-type response format, was subjected to Principal Components Analysis. The analysis yielded a three-factor solution, namely a) Interpersonal Sensitivity, b) Flexibility, and c) Motivation, explaining a total of 74.326 per cent of the variance.
Based on the results, implications for instrument revisions are discussed and specifications for future confirmatory factor analysis are delineated.
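
The internal-consistency check in pilot step e) is typically Cronbach's alpha, which can be computed directly from respondent-by-item ratings. The responses below are invented toy data on a six-point scale, not the 335-teacher survey.

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal-consistency reliability.
    item_scores: one list of item ratings per respondent."""
    k = len(item_scores[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([resp[i] for resp in item_scores]) for i in range(k)]
    total_var = var([sum(resp) for resp in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

responses = [  # 5 toy respondents x 4 items on a six-point scale
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [6, 5, 6, 6],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
]
alpha = cronbach_alpha(responses)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency for a scale under development.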

Keywords: exploratory factor analysis, principal change leadership non-technical competencies (PCLnTC), interpersonal sensitivity, flexibility, motivation

Procedia PDF Downloads 425
345 Effect of Blood Sugar Levels on Short Term and Working Memory Status in Type 2 Diabetics

Authors: Mythri G., Manjunath ML, Girish Babu M., Shireen Swaliha Quadri

Abstract:

Background: The increase in diabetes among the elderly is of concern because, in addition to the wide range of traditional diabetes complications, evidence has been growing that diabetes is associated with an increased risk of cognitive decline. Aims and Objectives: To find out if there is any association between blood sugar levels and short-term and working memory status in patients with type 2 diabetes. Materials and Methods: The study was carried out in 200 individuals aged between 40-65 years, consisting of 100 diagnosed cases of Type 2 Diabetes Mellitus and 100 non-diabetics from the OPD of McGann Hospital, Shivamogga. Rey's Auditory Verbal Learning Test, the Verbal Fluency Test, the Visual Reproduction Test, the Working Digit Span Test and the Validation Span Test were used to assess short-term and working memory. Fasting and post-prandial blood sugar levels were estimated. Statistical analysis was done using SPSS 21. Results: Memory test scores of type 2 diabetics were significantly reduced (p < 0.001) when compared to the memory scores of age- and gender-matched non-diabetics. Fasting blood sugar levels were found to have a negative correlation with memory scores on all 5 tests: AVLT (r = -0.837), VFT (r = -0.888), VRT (r = -0.787), WDST (r = -0.795) and VST (r = -0.943). Post-prandial blood sugar levels were found to have a negative correlation with memory scores on all 5 tests: AVLT (r = -0.922), VFT (r = -0.848), VRT (r = -0.707), WDST (r = -0.729) and VST (r = -0.880). Memory scores on all 5 tests were negatively correlated with the FBS and PPBS levels in diabetic patients (p < 0.001). Conclusion: The decreased memory status in diabetic patients may be due to many factors such as hyperglycemia, vascular disease, insulin resistance and amyloid deposition; other factors, such as the type of diabetes, co-morbidities, age of onset, duration of the disease and type of therapy, may combine to produce additive effects. These observed effects of diabetics' blood sugar levels on memory status are of potential clinical importance because even mild cognitive impairment can interfere with day-to-day activities.
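
The r values reported above are Pearson correlation coefficients between blood sugar levels and memory scores. As a minimal sketch with invented numbers (not the study's data), the computation is:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy values: higher fasting glucose (mg/dL) paired with lower memory score.
fbs = [90, 110, 130, 150, 170, 190]
score = [48, 45, 40, 36, 30, 26]
r = pearson_r(fbs, score)
```

A strongly negative r, as in the toy data, is the pattern the study reports for all five memory tests against both FBS and PPBS.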

Keywords: diabetes, cognition, HRV, respiratory medicine

Procedia PDF Downloads 282
344 An Introduction to the Concept of Environmental Audit: Indian Context

Authors: Pradip Kumar Das

Abstract:

Phenomenal growth of population and industry exploits the environment in varied ways. Consequently, the greenhouse effect and other allied problems are threatening mankind the world over. Protection and upgradation of the environment have therefore become a prime necessity for all of mankind and for the sustainable development of the environment. People in humbler walks of life, including corporate citizens, have become aware of the impacts of environmental pollution. Governments of various nations have entered the picture with laws and regulations to correct and cure the effects of present and past violations of environmental practices and to obstruct future violations of good environmental discipline. In this perspective, environmental audit directs verification and validation to ensure that the various environmental laws are complied with and that adequate care has been taken towards environmental protection and preservation. The discipline of environmental audit has experienced impressive development throughout the world. It examines the positive and negative effects of the activities of an enterprise on the environment and provides an in-depth study of the company's processes and of its progress in realizing long-term strategic goals. Environmental audit helps corporations assess their achievements, correct deficiencies and reduce risks to health and safety. Environmental audit, being a strong management tool, should be administered by industry for its own self-assessment. Developed countries all over the globe have gone ahead with environmental quantification; but unfortunately, there is a lack of awareness about pollution and environmental hazards among the common people in India. In the light of this situation, the conceptual analysis of this study is concerned with the rationale of environmental audit for industry and society as a whole and highlights the emerging dimensions in auditing theory and practice. A modest attempt has been made to throw light on recent developments in environmental audit in developing nations like India and the problems associated with the implementation of environmental audit. The conceptual study also reflects that, despite different obstacles, environmental audit is becoming an increasingly important aspect within the corporate sector in India; lastly, conclusions along with suggestions are offered to improve the current scenario.

Keywords: environmental audit, environmental hazards, environmental laws, environmental protection, environmental preservation

Procedia PDF Downloads 272
343 Development and Validation of a Liquid Chromatographic Method for the Quantification of Related Substance in Gentamicin Drug Substances

Authors: Sofiqul Islam, V. Murugan, Prema Kumari, Hari

Abstract:

Gentamicin is a broad-spectrum, water-soluble aminoglycoside antibiotic produced by fermentation of the microorganism Micromonospora purpurea. It is widely used for the treatment of infections caused by both gram-positive and gram-negative bacteria. Gentamicin consists of a mixture of aminoglycoside components, namely C1, C1a, C2a, and C2. Gentamicin and its related substances lack a chromophore group in the molecule, which makes the detection of these components quite critical and challenging. In this study, a simple Reversed Phase-High Performance Liquid Chromatographic (RP-HPLC) method using an ultraviolet (UV) detector was developed and validated for quantification of the related substances present in Gentamicin drug substances. Separation was achieved on a Thermo Scientific Hypersil Gold analytical column (150 x 4.6 mm, 5 µm particle size) with isocratic elution composed of methanol: water: glacial acetic acid: sodium hexane sulfonate in the ratio 70:25:5:3 % v/v/v/w as the mobile phase, at a flow rate of 0.5 mL/min, a column temperature of 30 °C and a detection wavelength of 330 nm. The four components of Gentamicin, namely C1, C1a, C2a, and C2, were well separated, along with the related substances present in Gentamicin. The Limit of Quantification (LOQ) was found to be 0.0075 mg/mL. The accuracy of the method was satisfactory, with recoveries between 95-105% for the related substances. The correlation coefficient (≥ 0.995) shows a linear response against concentration over the range starting from the LOQ. Precision studies showed % Relative Standard Deviation (RSD) values of less than 5% for the related substances. The method was validated in accordance with International Conference on Harmonisation (ICH) guidelines for parameters including system suitability, specificity, precision, linearity, accuracy, limit of quantification, and robustness. The proposed method is simple and suitable for the quantification of related substances in routine analysis of Gentamicin formulations.
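
Two of the acceptance criteria quoted above (precision as %RSD below 5%, accuracy as 95-105% recovery) are one-line calculations. The peak areas and concentrations below are invented for illustration, not the study's chromatographic data.

```python
import statistics

def percent_rsd(values):
    """Percent relative standard deviation, the precision measure with the
    < 5% acceptance criterion cited above."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def percent_recovery(found, spiked):
    """Percent recovery for accuracy, accepted here between 95% and 105%."""
    return 100.0 * found / spiked

# Toy replicate peak areas from six injections and one spiked-sample pair
# (found vs. spiked concentration in mg/mL).
areas = [10250, 10180, 10310, 10275, 10222, 10290]
rsd = percent_rsd(areas)
rec = percent_recovery(found=0.0297, spiked=0.0300)
```

Both toy values pass the stated criteria; in a real validation, the same calculations are repeated at multiple concentration levels across the linear range.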

Keywords: reversed phase-high performance liquid chromatographic (RP-HPLC), high performance liquid chromatography, gentamicin, isocratic, ultraviolet

Procedia PDF Downloads 161
342 Insight into Localized Fertilizer Placement in Major Cereal Crops

Authors: Solomon Yokamo, Dianjun Lu, Xiaoqin Chen, Huoyan Wang

Abstract:

The current ‘high input-high output’ nutrient management model, based on homogeneous spreading over the entire soil surface, remains a key challenge in China’s farming systems, leading to low fertilizer use efficiency and environmental pollution. Localized placement of fertilizer (LPF) in crop root zones has been proposed as a viable approach to boost crop production while reducing environmental pollution. To assess the potential benefits of LPF for three major crops (wheat, rice, and maize), a comprehensive meta-analysis was conducted, encompassing 85 field studies published from 2002 to 2023. We further validated the practicability and feasibility of one-time root-zone N management based on LPF for the three field crops. The meta-analysis revealed that LPF significantly increased the yields of the selected crops (13.62%) and nitrogen recovery efficiency (REN) (33.09%) while reducing cumulative nitrous oxide (N₂O) emission (17.37%) and ammonia (NH₃) volatilization (60.14%) compared to conventional surface application (CSA). Higher grain yield and REN were achieved with an optimal fertilization depth of 5-15 cm, moderate N rates, combined NPK application, one-time deep fertilization, and coarse-textured and slightly acidic soils. Field validation experiments showed that localized one-time root-zone N management without topdressing increased maize (6.2%), rice (34.6%), and wheat (2.9%) yields while saving N fertilizer (3%), and it also increased the net economic benefit (23.71%) compared to CSA. A soil incubation study further proved the potential of LPF to enhance the retention and availability of mineral N in the root zone over an extended period. Thus, LPF could be an important fertilizer management strategy and should be extended to other less-developed and developing regions to achieve the triple benefit of food security, environmental quality, and economic gains.
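
Percent effects like the +13.62% yield change reported above are commonly derived in yield meta-analyses from the natural-log response ratio (lnRR), averaged across studies and then back-transformed. A minimal sketch with a hypothetical pair of means (not values taken from the 85 studies):

```python
import math

def log_response_ratio(treatment_mean, control_mean):
    """Effect size used in yield meta-analyses: ln of the treatment/control ratio."""
    return math.log(treatment_mean / control_mean)

def percent_change(lnrr):
    """Back-transform a (possibly study-averaged) lnRR into a percent change."""
    return (math.exp(lnrr) - 1.0) * 100.0

# Hypothetical yields (t/ha): LPF treatment vs. conventional surface application.
lnrr = log_response_ratio(5.68, 5.0)
pct = percent_change(lnrr)
```

Working on the log scale keeps ratios symmetric (a halving and a doubling have equal magnitude), which is why the averaging happens before the back-transform.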

Keywords: grain yield, LPF, NH₃ volatilization, N₂O emission, N recovery efficiency

Procedia PDF Downloads 19
341 Hierarchical Operation Strategies for Grid Connected Building Microgrid with Energy Storage and Photovoltaic Source

Authors: Seon-Ho Yoon, Jin-Young Choi, Dong-Jun Won

Abstract:

This paper presents hierarchical operation strategies that minimize the error between the day-ahead operation plan and real-time operation. Operating power systems between centralized and decentralized approaches can be represented as a hierarchical control scheme comprising primary, secondary, and tertiary control. Primary control is known as local control and features a fast response. Secondary control is referred to as the microgrid Energy Management System (EMS). Tertiary control is responsible for coordinating the operations of multiple microgrids. In this paper, we formulated three-stage microgrid operation strategies analogous to this hierarchical control scheme. The first stage sets the day-ahead scheduled output power of the Battery Energy Storage System (BESS), the only controllable source in the microgrid, optimized to minimize the cost of power exchanged with the main grid using the Particle Swarm Optimization (PSO) method. The second stage controls the active and reactive power of the BESS so that it follows the day-ahead schedule when a State of Charge (SOC) error occurs between real-time operation and the scheduled plan. The third stage reschedules the system when the predicted error exceeds a limit value. The first stage can be compared with secondary control in that it adjusts the active power. The second stage is comparable to primary control in that it corrects the error locally. The third stage is compared with secondary control in that it manages power balancing. The proposed strategies will be applied to one of the buildings of the Electronics and Telecommunications Research Institute (ETRI). The building microgrid is composed of Photovoltaic (PV) generation, a BESS, and load, and it will be interconnected with the main grid. The main purpose is to minimize operation cost and to operate according to the scheduled plan. Simulation results support the validation of the proposed strategies.
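The first stage described above can be sketched as a minimal PSO over a 24-hour BESS schedule. The load, PV, and price profiles and the power limit below are invented, and SOC and energy-balance constraints are omitted for brevity, so this is only an illustration of the optimization structure, not the paper's formulation.

```python
import random

random.seed(0)

# Hypothetical 24-h load and PV forecasts (kW) and grid price ($/kWh).
load = [30 + 10 * (h in range(8, 20)) for h in range(24)]
pv = [max(0, 20 - abs(h - 12) * 4) for h in range(24)]
price = [0.20 if 9 <= h <= 21 else 0.10 for h in range(24)]
P_MAX = 15.0  # assumed BESS charge/discharge limit (kW)

def cost(schedule):
    # Cost of importing from the main grid; positive BESS output reduces it.
    return sum(p * max(0.0, l - s - b)
               for p, l, s, b in zip(price, load, pv, schedule))

# Minimal particle swarm: each particle is a 24-value BESS schedule.
n, dims, iters = 20, 24, 200
pos = [[random.uniform(-P_MAX, P_MAX) for _ in range(dims)] for _ in range(n)]
vel = [[0.0] * dims for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)
for _ in range(iters):
    for i in range(n):
        for d in range(dims):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] = max(-P_MAX, min(P_MAX, pos[i][d] + vel[i][d]))
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=cost)

print(round(cost(gbest), 2))
```

A production scheduler would add SOC dynamics and round-trip efficiency as constraints or penalty terms inside `cost`.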

Keywords: Battery Energy Storage System (BESS), Energy Management System (EMS), Microgrid (MG), Particle Swarm Optimization (PSO)

Procedia PDF Downloads 248
340 Effects of Rising Cost of Building Materials in Nigeria: A Case Study of Adamawa State

Authors: Ibrahim Yerima Gwalem, Jamila Ahmed Buhari

Abstract:

In recent years, there has been an alarming increase in the costs of building materials in Nigeria, and this phenomenon threatens the contributions of the construction industry to national development. The purpose of this study was to assess the effects of the rising cost of building materials in Adamawa State, Nigeria. Four research questions in line with the purpose of the study were raised to guide the study. Two null hypotheses were formulated and tested at the 0.05 level of significance. The study adopted a survey research design. The population of the study comprises registered contractors, registered builders, selected merchants, and consultants in Adamawa State. Data were collected using a researcher-designed instrument tagged the Effects of the Rising Cost of Building Materials Questionnaire (ERCBMQ). The instrument was subjected to face and content validation by two experts, one from Modibbo Adama University of Technology Yola and the other from Federal Polytechnic Mubi. The reliability of the instrument was determined by the Cronbach Alpha method and yielded a reliability index of 0.85, high enough to ascertain reliability. Data collected from a 2019 field survey were analyzed using means and percentages. The means of the prices were used in the calculation of price indices and rates of inflation of building materials. Findings revealed that the main factors responsible for the rising cost of building materials are the exchange rate of the Nigerian Naira, with a mean rating (MR) = 4.4; the cost of fuel and power supply, MR = 4.3; and changes in government policies and legislation, MR = 4.2, while fluctuations in construction cost, MR = 2.8; reduced volume of construction output, MR = 2.52; and risk of project abandonment, MR = 2.51, were the three main effects. The study concluded that these adverse effects could depress the contribution of the construction industry to the nation’s gross domestic product (GDP).
Among the recommendations proffered is that the government should formulate a policy to reduce reliance on imported building materials by encouraging research into the production of local building materials.
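The price-index and inflation-rate calculations mentioned above can be sketched as follows. The material prices and base year are invented for illustration; the method (base-period index scaled to 100, year-on-year change of the index) is the standard one.

```python
# Hypothetical mean survey prices (Naira) of one building material, by year.
mean_price = {2016: 1450.0, 2017: 1600.0, 2018: 1900.0, 2019: 2450.0}

base_year = 2016  # assumed base period

# Price index: price relative to the base year, scaled to 100.
index = {y: 100.0 * p / mean_price[base_year] for y, p in mean_price.items()}

# Year-on-year inflation rate (%) from consecutive indices.
years = sorted(index)
inflation = {y: 100.0 * (index[y] - index[prev]) / index[prev]
             for prev, y in zip(years, years[1:])}

print(round(index[2019], 2), round(inflation[2019], 2))
```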

Keywords: effects, rising, cost, building, materials

Procedia PDF Downloads 139
339 Development and Validation of Cylindrical Linear Oscillating Generator

Authors: Sungin Jeong

Abstract:

This paper presents a linear oscillating generator of cylindrical type for hybrid electric vehicle applications. The focus of the study is the suggestion of the optimal model and design rule for a cylindrical linear oscillating generator with permanent magnets in the back-iron translator. The cylindrical topology is initially modeled using an equivalent magnetic circuit that accounts for leakage elements. This topology with permanent magnets in the back-iron translator is described by the number of phases and the stroke displacement. For a more accurate analysis of an oscillating machine, the thrust of the single-phase and three-phase systems is compared while moving one pole pitch forward and backward. Through this analysis and comparison, a single-phase system of cylindrical topology is selected as the optimal topology. Finally, the detailed design of the optimal topology takes magnetic saturation effects into account through finite element analysis. In addition, the losses are examined to obtain more accurate results: copper loss in the conductors of the machine windings, eddy-current loss in the permanent magnets, and iron loss in the specific electrical steel. Consideration of thermal performance and mechanical robustness is essential, because the high temperatures generated by losses in each region of the generator affect the overall efficiency and the insulation of the machine. Moreover, an electric machine with linear oscillating movement requires a support system that can resist dynamic forces and mechanical masses. Accordingly, a fatigue analysis of the shaft is performed using the kinetic equations, and the thermal characteristics are analyzed at the operating frequency in each region. The results of this study provide an important design rule for linear oscillating machines, enabling more accurate machine design and more accurate prediction of machine performance.

Keywords: equivalent magnetic circuit, finite element analysis, hybrid electric vehicle, linear oscillating generator

Procedia PDF Downloads 195
338 Fault-Tolerant Control Study and Classification: Case Study of a Hydraulic-Press Model Simulated in Real-Time

Authors: Jorge Rodriguez-Guerra, Carlos Calleja, Aron Pujana, Iker Elorza, Ana Maria Macarulla

Abstract:

Society demands more reliable manufacturing processes capable of producing high-quality products in shorter production cycles. New control algorithms have been studied to satisfy this paradigm, in which Fault-Tolerant Control (FTC) plays a significant role: it is suitable for detecting, isolating, and adapting a system when a harmful or faulty situation appears. In this paper, a general overview of FTC characteristics is presented, highlighting the properties a system must ensure to be considered faultless. In addition, the main FTC techniques are identified and classified, based on their characteristics, into two main groups: Active Fault-Tolerant Controllers (AFTCs) and Passive Fault-Tolerant Controllers (PFTCs). AFTC encompasses the techniques capable of re-configuring the process control algorithm after the fault has been detected, while PFTC comprehends the algorithms robust enough to bypass the fault without further modifications. The mentioned re-configuration requires two stages: one focused on detection, isolation, and identification of the fault source, and the other in charge of re-designing the control algorithm by one of two approaches: fault accommodation and control re-design. From the algorithms studied, one has been selected and applied to a case study based on an industrial hydraulic press. The developed model has been embedded in a real-time validation platform, which allows testing the FTC algorithms and analyzing how the system will respond when a fault arises under conditions similar to those a machine experiences on the factory floor. One AFTC approach has been selected as the methodology the system follows in the fault recovery process. In a first instance, the fault is detected, isolated, and identified by means of a neural network. In a second instance, the control algorithm is re-configured to overcome the fault and continue working without human interaction.

Keywords: fault-tolerant control, electro-hydraulic actuator, fault detection and isolation, control re-design, real-time

Procedia PDF Downloads 177
337 Development of Wide Bandgap Semiconductor Based Particle Detector

Authors: Rupa Jeena, Pankaj Chetry, Pradeep Sarin

Abstract:

The study of fundamental particles and the forces governing them has always been an attractive field of theoretical study. With the advancement of new technologies and instruments, it is now possible to perform large-scale particle physics experiments to validate theoretical predictions. These experiments are generally carried out in a highly intense beam environment, which in turn requires the development of detector prototypes possessing properties such as radiation tolerance, thermal stability, and fast timing response. Semiconductors like silicon, germanium, diamond, and Gallium Nitride (GaN) have been widely used for particle detection applications. Silicon and germanium, being narrow-bandgap semiconductors, require pre-cooling to suppress noise from thermally generated intrinsic charge carriers. The application of diamond in large-scale experiments is rare owing to its high fabrication cost, while GaN is one of the most extensively explored potential candidates. Considering all these requirements, we aim to introduce another wide-bandgap semiconductor into this active area of research: we exploit the wide bandgap of rutile Titanium dioxide (TiO2), among its other properties, for particle detection purposes. The thermal evaporation-oxidation (in a PID furnace) technique is used for deposition of the film, and the Metal-Semiconductor-Metal (MSM) electrical contacts are made using Titanium+Gold (Ti+Au) (20/80 nm). Characterization comprising X-Ray Diffraction (XRD), Atomic Force Microscopy (AFM), Ultraviolet (UV)-Visible spectroscopy, and Laser Raman Spectroscopy (LRS) has been performed on the film to obtain detailed information about surface morphology. In parallel, electrical characterizations such as Current-Voltage (IV) measurements in dark and light conditions and tests with a laser are performed to better understand the working of the detector prototype.
All these preliminary tests of the detector will be presented.

Keywords: particle detector, rutile titanium dioxide, thermal evaporation, wide bandgap semiconductors

Procedia PDF Downloads 79
336 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite

Authors: Georgios Koronis, Arlindo Silva

Abstract:

This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. It attempts to identify relationships between the design parameters and the product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been addressed by numerous studies of green composites; however, these studies are limited and have explored principally mass-production processes. We expect here to develop knowledge about the composite's manufacturing that can be used to design artifacts that are low-batch and tailored to niche markets. The goal is to reach greater consistency in the performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the process response function (under various operating conditions) is modeled by the Design of Experiments (DoE). Normally, a full factorial experiment is required, consisting of all possible combinations of levels for all factors; an analytical assessment is possible, though, with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at 2 levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow, in which a number of fabricated coupons will be tested to validate the setup of the design of experiments.
Finally, the results are validated by running the optimal parameter set, as indicated by the DoE, in a final set of experiments. It is expected that, after a good agreement between the predicted and verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
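The 2-level, 3-factor full factorial design described above can be sketched as follows; the main effect of each factor is the mean response at the high level minus the mean at the low level. The strength values are invented for illustration, and interaction terms are omitted.

```python
from itertools import product

# The three 2-level vacuum infusion factors named in the abstract.
factors = ["flow_rate", "injection_point", "fiber_treatment"]

# Full factorial: all 2^3 combinations, coded -1 / +1.
runs = list(product([-1, 1], repeat=len(factors)))

# Hypothetical measured tensile strengths (MPa), one per run.
strength = [52.0, 55.5, 50.8, 54.9, 57.1, 60.2, 56.0, 59.8]

def main_effect(col):
    # Mean response at the +1 level minus mean response at the -1 level.
    hi = [y for run, y in zip(runs, strength) if run[col] == 1]
    lo = [y for run, y in zip(runs, strength) if run[col] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: round(main_effect(i), 2) for i, name in enumerate(factors)}
print(effects)
```

Two-factor interactions are estimated the same way, using the product of the two coded columns in place of a single column.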

Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites

Procedia PDF Downloads 204
335 Neural Networks Models for Measuring Hotel Users Satisfaction

Authors: Asma Ameur, Dhafer Malouche

Abstract:

Nowadays, user comments on the Internet have an important impact on hotel bookings, confirming that e-reputation can influence the likelihood of customer loyalty to a hotel. In this way, e-reputation has become a real differentiator between hotels, and the field of opinion mining offers a unique opportunity to analyze the comments, since it makes it possible to extract information related to the polarity of user reviews. This sentiment study (opinion mining) represents a new line of research for analyzing unstructured textual data. Knowing the e-reputation score helps the hotelier better manage his marketing strategy, and the score obtained is translated into the image of a hotel so as to differentiate between hotels. This present research therefore highlights the importance of hotel satisfaction scoring. To calculate the satisfaction score, sentiment analysis can be carried out with several machine learning techniques; this study treats the extracted textual data using the Artificial Neural Networks (ANNs) approach. In this context, we adopt the aforementioned technique to extract information from the comments available on the ‘Trip Advisor’ website. This paper details the description and modeling of the ANNs approach for the scoring of online hotel reviews. In summary, the validation of this method provides a significant model for hotel sentiment analysis, making it possible to determine precisely the polarity of hotel users’ reviews. The empirical results show that ANNs are an accurate approach for sentiment analysis, and also that the proposed approach supports dimensionality reduction for textual data clustering. Thus, this study provides researchers with a useful exploration of this technique. Finally, we outline guidelines for future research in the hotel e-reputation field, such as comparing ANNs with other techniques.
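As a toy illustration of polarity scoring, here is a single logistic unit (the simplest possible "neuron") over a bag-of-words, far simpler than the paper's full ANN. The lexicon and weights are invented; a trained network would learn them from labelled reviews.

```python
import math

# Hypothetical bag-of-words weights, as a single trained logistic unit might
# hold them: positive words push the score up, negative words push it down.
weights = {"clean": 1.2, "friendly": 0.9, "dirty": -1.5, "noisy": -0.8}
bias = 0.1

def score(review):
    # Weighted sum of known words, squashed to a 0-1 satisfaction score.
    z = bias + sum(weights.get(w, 0.0) for w in review.lower().split())
    return 1.0 / (1.0 + math.exp(-z))

print(round(score("clean room and friendly staff"), 2))  # positive review
print(round(score("dirty and noisy room"), 2))           # negative review
```

A real sentiment network stacks many such units and learns the weights by backpropagation; the scoring step at the output layer is the same sigmoid squash.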

Keywords: clustering, consumer behavior, data mining, e-reputation, machine learning, neural network, online hotel reviews, opinion mining, scoring

Procedia PDF Downloads 136
334 In vitro Modeling of Aniridia-Related Keratopathy by the Use of Crispr/Cas9 on Limbal Epithelial Cells and Rescue

Authors: Daniel Aberdam

Abstract:

Haploinsufficiency of PAX6 in humans is the main cause of congenital aniridia, a rare eye disease characterized by reduced visual acuity. Patients also have progressive disorders, including cataract, glaucoma, and corneal abnormalities, making their condition very challenging to manage. Aniridia-related keratopathy (ARK), caused by a combination of factors including limbal stem-cell deficiency, impaired healing response, abnormal differentiation, and infiltration of conjunctival cells onto the corneal surface, affects up to 95% of patients. It usually begins in the first decade of life, resulting in recurrent corneal erosions and sub-epithelial fibrosis with corneal decompensation and opacification. Unfortunately, treatment options for aniridia patients are currently limited. Although animal models partially recapitulate this disease, there is no in vitro cellular model of ARK, which is needed for the screening and validation of drugs and therapeutic tools. We used genome editing (CRISPR/Cas9 technology) to introduce a nonsense mutation found in patients into one allele of the PAX6 gene in limbal stem cells. The resulting mutated clones, which express half the amount of PAX6 protein and are thus representative of haploinsufficiency, were further characterized. Sequencing analysis showed that no off-target mutations were induced. The mutated cells displayed reduced cell proliferation and cell migration but enhanced cell adhesion, and the expression of known PAX6 targets was also reduced. Remarkably, the addition of soluble recombinant PAX6 protein to the culture medium was sufficient to activate the endogenous PAX6 gene and, as a consequence, rescue the phenotype. This strongly suggests that our in vitro model recapitulates the epithelial defect well and becomes a powerful tool to identify drugs that could rescue the corneal defect in patients. Furthermore, we demonstrate that the homeotic transcription factor PAX6 can be taken up naturally by recipient cells and function in the nucleus.

Keywords: Pax6, crispr/cas9, limbal stem cells, aniridia, gene therapy

Procedia PDF Downloads 207
333 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery

Authors: Forouzan Salehi Fergeni

Abstract:

A brain-computer interface (BCI) system converts the movement intents of a person into commands for action using brain signals such as the electroencephalogram. When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve BCI structures, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on electroencephalography (EEG) are greatly needed. Subject dependency and non-stationarity are two features of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a CAR (common average reference) spatial filter is applied for denoising, and then a method of analysis of variance is used to select more appropriate and informative channels from a large set of different channels. After ordering channels by their efficiency, a sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, with the purpose of comparing their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrated that the SVM classifier achieved the highest classification precision, 97%, compared with the other approaches.
The entire set of findings confirms that the suggested framework is reliable and computationally effective for the construction of BCI systems and surpasses the existing methods.
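The t-test feature shortlisting step described above scores each feature by how well it separates the two imagery classes. A minimal sketch with Welch's t-statistic follows; the feature values for the two classes are invented.

```python
import math
from statistics import mean, variance

# Hypothetical values of one extracted feature for two imagery classes
# (left- vs. right-hand trials).
left = [0.42, 0.55, 0.48, 0.61, 0.52, 0.47]
right = [0.71, 0.66, 0.74, 0.69, 0.78, 0.64]

def welch_t(a, b):
    # Welch's t-statistic: difference of class means over its standard error.
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(left, right)
print(round(abs(t), 2))
```

Features are ranked by |t| and only the top-scoring ones are passed to the classifiers; a larger |t| means the feature discriminates the two classes more strongly.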

Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine

Procedia PDF Downloads 50
332 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts with the selection of datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After obtaining the data, it was pre-processed. The major pre-processing activities for this study included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation activities such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used as a testing set to validate the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy: a prediction accuracy of 96.11% on the training datasets and 93.2% on the test dataset for classifying new instances into the normal, DOS, U2R, R2L, and probe classes.
The findings of this study have shown that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested toward developing an applicable system in the area of the study.
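The 10-fold cross-validation used to evaluate the models above can be sketched as follows: the training records are partitioned into ten folds, and each fold serves once as the validation set while the other nine train the model. The round-robin fold assignment is an assumption; stratified splits are common in practice.

```python
def ten_fold_indices(n_records, k=10):
    # Partition record indices into k roughly equal folds (round-robin);
    # yield (train, test) index lists with each fold held out once.
    folds = [list(range(i, n_records, k)) for i in range(k)]
    for held_out in range(k):
        test_idx = folds[held_out]
        train_idx = [j for f in range(k) if f != held_out for j in folds[f]]
        yield train_idx, test_idx

# Example with the study's 21,533 training records.
splits = list(ten_fold_indices(21533))
print(len(splits), len(splits[0][1]))
```

The reported cross-validation accuracy is then the average accuracy of the classifier over the ten held-out folds.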

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 296
331 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor in high-seam coal mining, and prototyping the obstacle detection system of an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, and validation in laboratory conditions with calculation of relative errors, were carried out. A method of calculating the capability of an apron feeder based on a machine vision system, together with a simplified technique for three-dimensional modeling of the examined measuring area, is offered. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This solves the problem of controlling the volume of coal produced by a feeder during extraction of high coal seams by longwall complexes with release onto a conveyor, with accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for the task of calculating feeder capability. A feature of the obstacle detection problem is correcting the distortions of the laser grid, which simplifies obstacle detection. The paper presents algorithms for video camera image processing and autonomous vehicle model control based on obstacle-detection machine vision systems. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
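The productivity calculation restricted to basic arithmetic can be sketched as follows: a material height profile (recovered from the distortion of the light-marker grid) is integrated into a cross-section area, then multiplied by feeder speed and bulk density. All of the numbers below are invented for illustration.

```python
# Hypothetical cross-section profile of material on the feeder, recovered
# from the light-marker grid: height (m) at fixed lateral steps across the pan.
heights = [0.00, 0.12, 0.25, 0.31, 0.28, 0.15, 0.02]  # m
step = 0.10           # lateral distance between grid lines (m), assumed
feeder_speed = 0.5    # feeder speed (m/s), assumed
bulk_density = 900.0  # loose coal density (kg/m^3), assumed

# Cross-section area by summing rectangular strips: only addition and
# multiplication, matching the paper's restriction to basic arithmetic.
area = sum(h * step for h in heights)      # m^2
volume_rate = area * feeder_speed          # m^3/s
productivity = volume_rate * bulk_density  # kg/s
print(round(productivity, 2))
```

Because only elementary operations are involved, the same calculation runs comfortably on a small microcontroller sampling the camera output at fixed intervals.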

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 114
330 An Assessment of Impact of Financial Statement Fraud on Profit Performance of Manufacturing Firms in Nigeria: A Study of Food and Beverage Firms in Nigeria

Authors: Wale Agbaje

Abstract:

The aim of this research study is to assess the impact of financial statement fraud on the profitability of selected Nigerian manufacturing firms, covering 2002 to 2016. The specific objectives were to ascertain the effect of incorrect asset valuation on return on assets (ROA) and to ascertain the relationship between improper expense recognition and return on assets (ROA). To achieve these objectives, a descriptive research design was used, while secondary data were collected from the financial reports of the selected firms and the website of the Securities and Exchange Commission. The analysis of covariance (ANCOVA) was used, and the STATA II econometric method was used in the analysis of the data. The Altman model and the operating expense ratio were adopted in the analysis of the financial reports to create a dummy variable for the selected firms from 2002 to 2016, and validation of the parameters was ascertained using various statistical techniques such as the t-test, the coefficient of determination (R²), F-statistics, and the Wald chi-square. Two hypotheses were formulated and tested using the t-statistic at the 5% level of significance. The findings of the analysis revealed that there is a significant relationship between financial statement fraud and profitability in the Nigerian manufacturing industry: incorrect asset valuation has a significant positive relationship with return on assets (ROA), which serves as a proxy for profitability, and so does improper expense recognition. The implication is that distortion of asset valuation and expense recognition leads to decreasing profit in the long run in the manufacturing industry.
The study therefore recommended that pragmatic policy options be taken in the manufacturing industry to effectively manage incorrect asset valuation and improper expense recognition in order to enhance manufacturing industry performance in the country, and that the detection of financial statement fraud be adequately built into the internal control systems of manufacturing firms for the effective running of the manufacturing industry in Nigeria.
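The Altman model used above to flag distressed firms scores five financial ratios; a minimal sketch of the classic Z-score for public manufacturing firms follows (coefficients from Altman's original 1968 formulation; the input figures are invented).

```python
def altman_z(wc, re, ebit, mve, sales, ta, tl):
    """Classic Altman Z-score for public manufacturing firms.

    wc: working capital, re: retained earnings, ebit: earnings before
    interest and tax, mve: market value of equity, sales: total sales,
    ta: total assets, tl: total liabilities.
    """
    return (1.2 * wc / ta + 1.4 * re / ta + 3.3 * ebit / ta
            + 0.6 * mve / tl + 1.0 * sales / ta)

# Hypothetical firm figures (millions of Naira).
z = altman_z(wc=120, re=200, ebit=90, mve=500, sales=800, ta=1000, tl=400)
print(round(z, 2))
```

In the original formulation, Z above roughly 2.99 indicates the "safe" zone and Z below roughly 1.81 the "distress" zone, with the band between them treated as a grey area, which is how a score can be turned into the dummy variable the study describes.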

Keywords: Altman's model, improper expense recognition, incorrect asset valuation, return on assets

Procedia PDF Downloads 161
329 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and rise onto their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and by establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and on both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥ 0.878), with precision plots showing good agreement. CVs ranged from 0% (repetitions) to 33.3% (fatigue index) and were, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥ 0.949, CV ≤ 5.6%). Conclusion: These results confirm that the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
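The Bland-Altman and typical-error statistics used above can be sketched for one outcome as follows. The paired measurements are invented, and the typical error is taken as the standard deviation of the differences divided by √2, a common convention that may differ in detail from the authors' computation.

```python
from statistics import mean, stdev

# Hypothetical paired peak-height measurements (cm): app vs. motion capture.
app = [9.8, 10.4, 11.1, 9.5, 10.9, 10.2]
lab = [10.0, 10.5, 11.0, 9.8, 11.2, 10.1]

# Bland-Altman: bias and 95% limits of agreement on the paired differences.
diffs = [a - b for a, b in zip(app, lab)]
bias = mean(diffs)
loa = (bias - 1.96 * stdev(diffs), bias + 1.96 * stdev(diffs))

# Typical error expressed as a coefficient of variation (% of grand mean).
typical_error = stdev(diffs) / (2 ** 0.5)
cv = 100 * typical_error / mean(app + lab)

print(round(bias, 3), round(cv, 2))
```

A bias near zero with narrow limits of agreement and a small CV is what supports the "good to excellent" agreement reported in the abstract.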

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 166
328 Development of a Tilt-Rotor Aircraft Model Using System Identification Technique

Authors: Ferdinando Montemari, Antonio Vitale, Nicola Genito, Giovanni Cuciniello

Abstract:

The introduction of tilt-rotor aircraft into the existing civilian air transportation system will provide beneficial effects due to tilt-rotor capability to combine the characteristics of a helicopter and a fixed-wing aircraft into one vehicle. The disposability of reliable tilt-rotor simulation models supports the development of such vehicle. Indeed, simulation models are required to design automatic control systems that increase safety, reduce pilot's workload and stress, and ensure the optimal aircraft configuration with respect to flight envelope limits, especially during the most critical flight phases such as conversion from helicopter to aircraft mode and vice versa. This article presents a process to build a simplified tilt-rotor simulation model, derived from the analysis of flight data. The model aims to reproduce the complex dynamics of tilt-rotor during the in-flight conversion phase. It uses a set of scheduled linear transfer functions to relate the autopilot reference inputs to the most relevant rigid body state variables. The model also computes information about the rotor flapping dynamics, which are useful to evaluate the aircraft control margin in terms of rotor collective and cyclic commands. The rotor flapping model is derived through a mixed theoretical-empirical approach, which includes physical analytical equations (applicable to helicopter configuration) and parametric corrective functions. The latter are introduced to best fit the actual rotor behavior and balance the differences existing between helicopter and tilt-rotor during flight. Time-domain system identification from flight data is exploited to optimize the model structure and to estimate the model parameters. The presented model-building process was applied to simulated flight data of the ERICA Tilt-Rotor, generated by using a high fidelity simulation model implemented in FlightLab environment. 
Validation of the obtained model yielded very satisfactory results, confirming the validity of the proposed approach.
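The scheduling idea described in the abstract can be illustrated with a minimal sketch (not the authors' actual model): a first-order linear response whose gain and time constant are interpolated against a scheduling variable such as the nacelle conversion angle. All parameter names and values below are illustrative assumptions.

```python
import numpy as np

def scheduled_first_order_response(u, dt, angles, gains, taus, angle_profile):
    """Simulate x_dot = (K(theta) * u - x) / tau(theta) by forward Euler,
    where the gain K and time constant tau are scheduled (linearly
    interpolated) against the conversion angle theta at each time step.

    u             : input time history (1-D array)
    dt            : time step [s]
    angles        : breakpoints of the scheduling variable [deg]
    gains, taus   : gain and time-constant values at those breakpoints
    angle_profile : conversion-angle time history, same length as u
    """
    x = np.zeros(len(u))
    for k in range(1, len(u)):
        theta = angle_profile[k]
        K = np.interp(theta, angles, gains)      # scheduled gain
        tau = np.interp(theta, angles, taus)     # scheduled time constant
        x[k] = x[k - 1] + dt * (K * u[k - 1] - x[k - 1]) / tau
    return x
```

In an identification setting, the breakpoint values (here the hypothetical gains and time constants at 0° and 90° nacelle angle) would be the parameters estimated from flight data.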

Keywords: flapping dynamics, flight dynamics, system identification, tilt-rotor modeling and simulation

Procedia PDF Downloads 199
327 Development and Validation of the Circular Economy Scale

Authors: Yu Fang Chen, Jeng Fung Hung

Abstract:

This study aimed to develop a circular economy scale to assess the level of recognition of the circular economy among high-level executives in businesses. The circular economy is crucial for global ESG sustainable development and poses a challenge for corporate social responsibility. Promoting the circular economy aims to reduce resource consumption, move towards sustainable development, reduce environmental impact, maintain ecological balance, increase economic value, and promote employment. This study developed a 23-item Circular Economy Scale comprising three subscales: "Understanding of Circular Economy by Enterprises" (8 items), "Attitudes" (9 items), and "Behaviors" (6 items). A 5-point Likert scale was used to measure responses, with higher scores indicating stronger agreement among senior executives with regard to the circular economy. The study tested 105 senior executives and used a structural equation model (SEM) to determine how well the measurement indicators captured the latent variables. The standardized factor loading of each measurement indicator should exceed 0.7, and the average variance extracted (AVE), an index of convergent validity, should be greater than 0.5, or at least 0.45 to be acceptable. Of the 23 items, 12 did not meet these standards and were removed, leaving 5, 3, and 3 items in the three subscales, respectively, all with factor loadings greater than 0.7. The AVE of all three subscales was greater than 0.45, indicating good construct validity. The Cronbach's α reliability values of the three subscales were 0.887, 0.787, and 0.734, respectively, and that of the total scale was 0.860; all exceeded 0.7, indicating good reliability. The Circular Economy Scale developed in this study measures three conceptual components that align with the theoretical framework of the literature review and demonstrates good reliability and validity.
It can serve as a measurement tool for evaluating the degree of acceptance of the circular economy among senior executives in enterprises. In the future, senior executives can use this scale as an evaluation tool to further explore its impact on sustainable development and to promote the circular economy and sustainable development.
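As a concrete illustration of the reliability statistic reported here, Cronbach's α can be computed directly from raw item responses using the standard formula α = k/(k−1) · (1 − Σσ²ᵢ / σ²ₜ), where σ²ᵢ are the item variances and σ²ₜ is the variance of the total score. The sketch below uses generic Likert-style data, not the study's actual responses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (n_respondents, n_items) array of Likert
    responses: alpha = k/(k-1) * (1 - sum of item variances / variance
    of the respondents' total scores). Uses sample variance (ddof=1)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

When all items move together across respondents, α approaches 1; values above 0.7, as obtained for all three subscales in this study, are conventionally taken to indicate acceptable internal consistency.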

Keywords: circular economy, corporate social responsibility, scale development, structural equation model

Procedia PDF Downloads 83
326 Psychometric Properties of Several New Positive Psychology Measures

Authors: Lauren Benyo Linford, Jared Warren, Jeremy Bekker, Gus Salazar

Abstract:

In order to accurately identify areas needing improvement and track growth, the availability of valid and reliable measures of different facets of well-being is vital. Because no specific measures currently exist for many facets of well-being, the purpose of this study was to construct and validate measures of the following constructs: Purpose, Values, Mindfulness, Savoring, Gratitude, Optimism, Supportive Relationships, Interconnectedness, Compassion, Community, Contribution, Engaged Living, Personal Growth, Flow Experiences, Self-Compassion, Exercise, Meditation, and an overall measure of subjective well-being, the Survey on Flourishing (SURF). To assess their psychometric properties, each measure was examined for internal consistency, and items with poor item-test correlations were dropped. Additionally, the convergent validity of SURF was assessed: its total score was correlated with other commonly used measures of well-being, such as the Positive and Negative Affect Schedule (PANAS), the Satisfaction with Life Scale (SWLS), and the PERMA Profiler (a measure of Positive Emotion, Engagement, Relationships, Meaning, and Achievement). The Kessler Psychological Distress Scale (K6) was also included to assess the divergent validity of SURF, and three-week test-retest reliability of SURF was examined. Additionally, normative data from general population samples were collected for both the Self-Compassion measure and SURF. The purpose of this study is to introduce each of these measures, present the psychometric findings, and explore additional psychometric properties of SURF in particular. This study will highlight how these measures can be used in future research exploring these positive psychology constructs.
Additionally, this study will discuss the utility of these measures in guiding individuals' use of My Best Self 101, an online collection of self-directed, self-administered positive psychology resources developed by the researchers. The goal of My Best Self 101 is to disseminate research-based measures and tools to individuals who are seeking to increase their well-being.
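The item-screening step mentioned above, dropping items with poor item-test correlations, can be sketched generically as a corrected item-total correlation: each item is correlated with the sum of the remaining items so the item does not inflate its own correlation. This is a standard illustration under assumed simulated data, not the authors' exact procedure.

```python
import numpy as np

def corrected_item_total_correlations(items):
    """For each item, correlate its responses with the total of the
    remaining items (corrected item-total correlation). Items with low
    values are candidates for removal during scale refinement."""
    items = np.asarray(items, dtype=float)
    n, k = items.shape
    total = items.sum(axis=1)
    r = np.empty(k)
    for j in range(k):
        rest = total - items[:, j]          # total score excluding item j
        r[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return r
```

Applied to simulated data where four items share a common factor and a fifth is pure noise, the noise item stands out with a markedly lower correlation, which is the pattern that would flag it for removal.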

Keywords: measurement, psychometrics, test validation, well-being

Procedia PDF Downloads 188