Search results for: wind park model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18280


8890 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since a higher capture efficiency generally comes with a higher pressure drop, which in turn raises operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency at a lower pressure drop. Therefore, a two-dimensional flow simulation around a single fiber using Ansys CFX and Matlab is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach for the fiber by implementing a momentum loss model. This approach has been chosen to avoid creating a new mesh for different fiber sizes, thereby saving the time and effort of re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data is then extracted and imported into Matlab, where the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach for particles with all relevant forces acting on them. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr, and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to the single fiber theory from the literature.
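As a hedged illustration of the Matlab side of this workflow, the sketch below integrates a one-way coupled, drag-only particle trajectory around a single fiber. The potential-flow field `u_flow` is an invented stand-in for the imported CFX flow-field data, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def u_flow(x, y, U=1.0, a=0.5):
    """Potential flow around a cylinder (fiber) of radius a at the origin."""
    r2 = x**2 + y**2
    if r2 < a**2:                       # inside the fiber: no flow
        return np.zeros(2)
    ux = U * (1 - a**2 * (x**2 - y**2) / r2**2)
    uy = -U * a**2 * 2 * x * y / r2**2
    return np.array([ux, uy])

def track_particle(x0, v0, St, dt=1e-3, steps=5000, a=0.5):
    """Integrate dv/dt = (u - v)/St (nondimensional Stokes drag only).
    Returns the trajectory and whether the particle hit the fiber."""
    x, v = np.array(x0, float), np.array(v0, float)
    traj = [x.copy()]
    for _ in range(steps):
        u = u_flow(*x)
        v = v + dt * (u - v) / St       # drag term of the BBO equation
        x = x + dt * v
        traj.append(x.copy())
        if x @ x <= a**2:               # capture: particle reached the fiber
            return np.array(traj), True
    return np.array(traj), False

traj, captured = track_particle(x0=(-5.0, 0.05), v0=(1.0, 0.0), St=5.0)
```

Sweeping the Stokes number and the upstream release offset yields the limiting trajectory, and hence the single-fiber capture efficiency.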

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 207
8889 The Effect of Satisfaction with the Internet on Online Shopping Attitude with TAM Approach Controlled by Gender

Authors: Velly Anatasia

Abstract:

In the last few decades, extensive research has been conducted into information technology (IT) adoption, testing a series of factors considered essential for improved diffusion. Some studies analyze IT characteristics such as usefulness, ease of use, and/or security; others focus on the emotions and experiences of users; and a third group attempts to determine the importance of socioeconomic user characteristics such as gender, educational level, and income. The situation is similar regarding e-commerce, where the majority of studies have taken for granted the importance of including these variables when studying e-commerce adoption, as these were believed to explain or forecast who buys, or who will buy, on the internet. Nowadays, the internet has become a marketplace suitable for all ages, incomes, and both genders, and thus the prejudices linked to the advisability of selling certain products should be revised. The objective of this study is to test whether socioeconomic characteristics of experienced e-shoppers, such as gender, really moderate the effect of their perceptions on online shopping behavior. The current development of the online environment and the experience acquired by individuals from previous e-purchases can attenuate or even nullify the effect of these characteristics. The individuals analyzed are experienced e-shoppers, i.e., individuals who often make purchases on the internet. The Technology Acceptance Model (TAM) was broadened to include previous use of the internet and perceived self-efficacy. The perceptions and behavior of e-shoppers are based on their own experiences. The information obtained will be tested using questionnaires distributed to, and self-administered by, respondents accustomed to using the internet. The causal model is estimated using structural equation modeling (SEM) techniques, followed by tests of the moderating effect of socioeconomic variables on perceptions and online shopping behavior.
The expected findings of this study indicate that gender moderates neither the influence of previous use of the internet nor the perceptions of e-commerce. In short, it does not condition the behavior of the experienced e-shopper.

Keywords: Internet shopping, age groups, gender, income, electronic commerce

Procedia PDF Downloads 337
8888 Reality Shock Affecting the Motivation to Work of New Flight Attendants: An Exploratory Qualitative Study of Flight Attendants Who Left Their Jobs Early

Authors: Hiromi Takafuji

Abstract:

Flight attendant (FA) is a popular occupation, especially in Asian countries, and candidates are hired only after clearing a highly competitive selection process. On the other hand, immediately after joining the company, they experience unique stress because the organization requires them to perform security and customer service duties in a highly specialized and confined space and time. As a result, despite the difficulty of being hired, many new recruits retire early at a high rate. In Japan, it is commonly said that 30% of new graduates leave their company within three years, and Reality Shock (RS) is speculated to be one of the causes. RS refers to the stress newcomers experience due to the difference between pre-employment expectations and post-employment reality. The purpose of this study was to elucidate the mechanism by which the expertise required of new FAs, and the expectations of expertise each of them holds, cause reality shock, which affects motivation and the decision to leave. This study identified the professionalism required of new FAs and the impact of the expectation of professionalism on RS through an exploratory study of the experiences and psychological processes of FAs who left within three years. Semi-structured in-depth interviews were conducted with five FAs who left a major Japanese airline at an early stage, and their experiences were categorized, integrated, and classified by qualitative content analysis. The interviewees were chosen under a number of controlled conditions. Two major findings emerged: first, the pre-employment expectations defining RS were hierarchical, and second, training amplified expectations of professionalism, which strongly influenced early turnover. From these, this study generated a model of the RS generative process of FAs in which expectations are hierarchical and influential. This could contribute to preventing the deterioration of mental health caused by reality shock among new FAs.

Keywords: reality shock, flight attendant, early turnover, qualitative study

Procedia PDF Downloads 82
8887 Seismic Response of Viscoelastic Dampers for Steel Structures

Authors: Ali Khoshraftar, S. A. Hashemi

Abstract:

This paper focuses on the advantages of viscoelastic dampers (VED) used as energy-absorbing devices in buildings. The properties of VED are briefly described. Analytical studies of model structures exhibiting the structural response reduction due to these viscoelastic devices are presented. Computer simulation of the damped response of a multi-storey steel frame structure shows a significant reduction in floor displacement levels.
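The displacement reduction can be illustrated with a minimal single-degree-of-freedom sketch (not the paper's multi-storey model): a Kelvin-Voigt idealization of a VED adds stiffness k_d and damping c_d in parallel with the frame. All numerical values are invented.

```python
import numpy as np

def peak_displacement(m=1e4, k=4e6, c=5e3, k_d=0.0, c_d=0.0,
                      ag=lambda t: 0.3 * 9.81 * np.sin(18.0 * t),
                      dt=1e-3, T=10.0):
    """Semi-implicit Euler integration of the SDOF equation of motion
    m*u'' + (c + c_d)*u' + (k + k_d)*u = -m*ag(t); returns max |u|."""
    u, v, umax = 0.0, 0.0, 0.0
    for i in range(int(T / dt)):
        a = (-m * ag(i * dt) - (c + c_d) * v - (k + k_d) * u) / m
        v += a * dt                  # update velocity first (symplectic)
        u += v * dt
        umax = max(umax, abs(u))
    return umax

bare = peak_displacement()                      # frame without damper
damped = peak_displacement(k_d=5e5, c_d=8e4)    # frame with a VED attached
```

Near resonance the added damping dominates, so the peak floor displacement with the damper is markedly lower than without it.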

Keywords: dampers, seismic evaluation, steel frames, viscoelastic

Procedia PDF Downloads 484
8886 A Hybrid Image Fusion Model for Generating High Spatial-Temporal-Spectral Resolution Data Using OLI-MODIS-Hyperion Satellite Imagery

Authors: Yongquan Zhao, Bo Huang

Abstract:

Spatial, Temporal, and Spectral Resolution (STSR) are three key characteristics of Earth observation satellite sensors; however, no single satellite sensor can provide Earth observations with high STSR simultaneously, because of the hardware technology limitations of satellite sensors. On the other hand, the demand for high STSR has been growing with the development of remote sensing applications. Although image fusion technology provides a feasible means to overcome the limitations of current Earth observation data, existing fusion technologies cannot enhance all three resolutions simultaneously, nor can they provide a sufficient level of resolution improvement. This study proposes a Hybrid Spatial-Temporal-Spectral image Fusion Model (HSTSFM) to generate synthetic satellite data with simultaneously high STSR, which blends the high spatial resolution of the panchromatic image of the Landsat-8 Operational Land Imager (OLI), the high temporal resolution of the multi-spectral image of the Moderate Resolution Imaging Spectroradiometer (MODIS), and the high spectral resolution of the hyper-spectral image of Hyperion to produce high-STSR images. The proposed HSTSFM contains three fusion modules: (1) spatial-spectral image fusion; (2) spatial-temporal image fusion; (3) temporal-spectral image fusion. A set of test data with both phenological and land cover type changes in a suburban area of Beijing, China, is adopted to demonstrate the performance of the proposed method. The experimental results indicate that HSTSFM can produce fused images with good spatial and spectral fidelity to the reference image, which means it has the potential to generate synthetic data to support studies that require high-STSR satellite imagery.
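A hedged toy of the least-squares spectral-transformation step named in the keywords: fit a linear map from a few multispectral bands to many hyperspectral bands on co-registered coarse pixels, then apply it to fine-resolution pixels. The data here is synthetic; this is not the OLI/MODIS/Hyperion processing chain itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_coarse, n_ms, n_hs = 500, 4, 30

A_true = rng.normal(size=(n_ms, n_hs))            # hidden spectral mixing
ms_coarse = rng.uniform(0, 1, size=(n_coarse, n_ms))
hs_coarse = ms_coarse @ A_true + 0.01 * rng.normal(size=(n_coarse, n_hs))

# Least-squares fit of the MS -> HS spectral transformation
A_hat, *_ = np.linalg.lstsq(ms_coarse, hs_coarse, rcond=None)

# Apply to fine-resolution multispectral pixels to synthesize HS bands
ms_fine = rng.uniform(0, 1, size=(10000, n_ms))
hs_fine = ms_fine @ A_hat
```

The same regression idea, applied per locally adaptive window, is one common building block of spatial-spectral fusion.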

Keywords: hybrid spatial-temporal-spectral fusion, high resolution synthetic imagery, least square regression, sparse representation, spectral transformation

Procedia PDF Downloads 235
8885 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction

Authors: Pontus Backstrom

Abstract:

In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects remain to a large extent hidden in obscurity. The hypothesis of this study is that the Frame Factor Theory can be used to explain these mechanisms. At the heart of the theory is the concept of the "time needed" for students to learn a certain curriculum unit. The relation between the class-aggregated time needed and the actual time available steers and constrains the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher's instruction is governed by a "criterion steering group" (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N=4090, teachers N=200). Using confirmatory factor analysis (CFA) in the SEM framework in MPLUS, latent variables are specified according to the theory, such as "limitations of instruction" from TIMSS survey items. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students' test results. Further analysis is required, but so far the analysis confirms the predictions derived from the frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect the class composition has upon the teacher's instruction in class.
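The class-level predictor the theory points to can be sketched in a few lines: the CSG mean, i.e., the average aptitude of the pupils between the 10th and 25th percentiles of each class. The aptitude scores below are synthetic, not the TIMSS data.

```python
import numpy as np

def csg_mean(aptitudes):
    """Mean aptitude of the criterion steering group: pupils between the
    10th and 25th percentiles of their own class's distribution."""
    a = np.asarray(aptitudes, float)
    lo, hi = np.percentile(a, [10, 25])
    return a[(a >= lo) & (a <= hi)].mean()

rng = np.random.default_rng(1)
classes = [rng.normal(50, 10, size=25) for _ in range(200)]  # 200 classes
csg = np.array([csg_mean(c) for c in classes])               # one value/class
```

In the M-SEM, this per-class value would enter at the class level as a predictor of the "limitations of instruction" latent variable.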

Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling

Procedia PDF Downloads 134
8884 Exploring the Role of Hydrogen to Achieve the Italian Decarbonization Targets Using an Open-Science Energy System Optimization Model

Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi

Abstract:

Hydrogen is expected to become an undisputed player in the ecological transition over the next decades. The decarbonization potential offered by this energy vector provides various opportunities for the so-called "hard-to-abate" sectors, including the industrial production of iron and steel, glass, refineries, and heavy-duty transport. In this regard, Italy, within the framework of the decarbonization plans of the European Union, has been considering a wider use of hydrogen as an alternative to fossil fuels in hard-to-abate sectors. This work aims to assess and compare different options for the development pathway of the future Italian energy system, in order to meet the decarbonization targets established by the Paris Agreement and the European Green Deal, and to derive a techno-economic analysis of the asset alternatives required in that perspective. To accomplish this objective, the energy system optimization model TEMOA-Italy is used; it is based on the open-source platform TEMOA and was developed at PoliTo as a tool for technology assessment and energy scenario analysis. The adopted assessment strategy includes two different scenarios to be compared with a business-as-usual one, which considers the application of current policies over a time horizon up to 2050. The studied scenarios are based on the up-to-date hydrogen-related targets and planned investments included in the National Hydrogen Strategy and in the Italian National Recovery and Resilience Plan, with the purpose of providing a critical assessment of what they propose. One scenario imposes decarbonization objectives for the years 2030, 2040, and 2050, without any other specific target. The second one, inspired by the national objectives on the development of the sector, promotes the deployment of the hydrogen value chain.
These scenarios provide feedback about the applications hydrogen could have in the Italian energy system, including transport, industry, and synfuels production. Furthermore, the decarbonization scenario in which hydrogen production is not imposed makes use of this energy vector as well, showing the necessity of its exploitation in order to meet the pledged targets by 2050. The distance of the planned policies from the optimal conditions for the achievement of the Italian objectives is clarified, revealing possible improvements at various steps of the decarbonization pathway, for which Carbon Capture and Utilization technologies appear to be a fundamental element. In line with the European Commission's open science guidelines, the transparency and robustness of the presented results are ensured by the adoption of an open-source, open-data model such as TEMOA-Italy.
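The core mechanic of such an energy system optimization model can be sketched as a toy least-cost linear program: meet a demand under a CO2 cap, with hydrogen as the costlier zero-emission option. All numbers are invented; TEMOA-Italy solves a far richer multi-period, multi-commodity version of this problem.

```python
from scipy.optimize import linprog

demand, co2_cap = 100.0, 20.0          # PJ of energy, Mt of CO2 (illustrative)
c = [5.0, 12.0]                        # cost per PJ: gas, hydrogen

# Constraints (<= form):  -x_gas - x_h2 <= -demand   (meet demand)
#                          0.5 * x_gas  <= co2_cap   (emissions cap on gas)
res = linprog(c,
              A_ub=[[-1.0, -1.0], [0.5, 0.0]],
              b_ub=[-demand, co2_cap],
              bounds=[(0, None), (0, None)])
gas, h2 = res.x                        # optimal supply mix
```

Tightening `co2_cap` shifts supply from gas to hydrogen, which is the same mechanism by which the decarbonization scenarios pull hydrogen into the optimal mix even when its deployment is not imposed.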

Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA

Procedia PDF Downloads 73
8883 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region

Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho

Abstract:

The modeling of areas susceptible to soil loss by hydro-erosive processes provides a simplified representation of reality for the purpose of predicting future behavior from the observation and interaction of a set of geoenvironmental factors. The models of areas with potential for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The choice of the municipality of Colorado do Oeste, in the south of the western Amazon, is due to soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, and to its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. 100 sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly, or indirectly, some influence on the mechanism of occurrence of soil erosion events. The chosen predictors are altitude, declivity, aspect (orientation of the slope), curvature of the slope, compound topographic index, stream power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste, Rondônia, through the SPSS Statistics 26 software.
Evaluation of the model will be based on the Cox & Snell R² and Nagelkerke R² values, the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, the ROC curve, and the cumulative gain according to the model specification. The validation of the synthesis map resulting from the models of potential soil erosion risk will be carried out by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the classes of susceptibility to erosion using drone photogrammetry. Thus, it is expected to obtain a map of the following classes of susceptibility to erosion: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, so that social resources can be applied more efficiently.
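A hedged numeric sketch of the planned dichotomous model and two of its evaluation statistics: a binary logit fitted by Newton-Raphson on synthetic presence/absence data, with the Cox & Snell and Nagelkerke pseudo-R² computed from the log-likelihoods. This is illustrative only; the study itself will use SPSS on field data.

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Binary logistic regression via Newton-Raphson (IRLS)."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept
    beta = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X1 @ beta))
        H = X1.T * (p * (1 - p)) @ X1            # Hessian of the NLL
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return beta

def pseudo_r2(X, y, beta):
    """Cox & Snell and Nagelkerke pseudo-R² from log-likelihoods."""
    X1 = np.column_stack([np.ones(len(X)), X])
    p = 1 / (1 + np.exp(-X1 @ beta))
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    p0 = y.mean()
    ll0 = len(y) * (p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))
    cox_snell = 1 - np.exp(2 * (ll0 - ll) / len(y))
    nagelkerke = cox_snell / (1 - np.exp(2 * ll0 / len(y)))
    return cox_snell, nagelkerke

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))        # stand-ins for e.g. slope, NDVI, TWI
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ [1.5, -1.0, 0.5])))).astype(float)
beta = fit_logit(X, y)
cs, nk = pseudo_r2(X, y, beta)
```

Nagelkerke's R² rescales Cox & Snell's so that its maximum is 1, which is why it is always the larger of the two.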

Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon

Procedia PDF Downloads 66
8882 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in areas such as chemical technologies, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation is focused on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation. Computation of this characteristic encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed using the two-temperature approach, in which at each point of the continuous medium we specify the solid and gas phases with Newtonian heat exchange between them. The construction of a computational scheme is based on the principles of the mixed finite element method with the use of a regular mesh. The approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Direct computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term "front propagation velocity" makes sense for settled motion, for which analytical formulae linking velocity and equilibrium temperature are valid.
The numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The algorithm obtained has been applied in the subsequent numerical investigation of the FGC process. In this way, the dependence of the main characteristics of the process on various physical parameters has been studied. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a sort of breakdown occurs from slow combustion front propagation to rapid propagation. Approximate boundaries of such an interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows considering a semi-Lagrangian approach to the solution of the problem.
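To illustrate why front velocity needs care, the sketch below uses a common stable alternative to grid differencing: locate the front by sub-grid interpolation of an isotherm in each snapshot, then fit the slope of position versus time. The traveling temperature profile and noise are invented; the paper's own formula, based on the equilibrium temperature, is not reproduced here.

```python
import numpy as np

def front_position(x, T, T_iso):
    """Sub-grid position of the first crossing of T_iso along a monotone front."""
    i = np.argmax(T < T_iso)                    # first node ahead of the front
    # linear interpolation between nodes i-1 and i
    return x[i-1] + (T[i-1] - T_iso) * (x[i] - x[i-1]) / (T[i-1] - T[i])

x = np.linspace(0.0, 1.0, 401)
times = np.linspace(0.0, 0.5, 11)
v_true = 0.8
rng = np.random.default_rng(3)

positions = []
for t in times:
    T = 1200 / (1 + np.exp((x - 0.1 - v_true * t) / 0.01))  # sharp moving front
    T += rng.normal(0.0, 2.0, size=x.size)                  # grid/measurement noise
    positions.append(front_position(x, T, 600.0))

v_est = np.polyfit(times, positions, 1)[0]      # least-squares slope = velocity
```

A one-step finite difference of noisy front positions would amplify the noise; the least-squares fit (or an analytical velocity formula, as in the paper) averages it out.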

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 302
8881 Determinant Factor of Farm Household Fruit Tree Planting: The Case of Habru Woreda, North Wollo

Authors: Getamesay Kassaye Dimru

Abstract:

The cultivation of fruit trees in degraded areas has two-fold importance: firstly, it improves food availability and income, and secondly, it promotes the conservation of soil and water, improving, in turn, the productivity of the land. The main objectives of this study are to identify the determinants of farmers' fruit tree planting decisions and to assess the major fruit production challenges and opportunities of the study area. The analysis was made using primary data collected from 60 sample households selected randomly from the study area in 2016. The primary data was supplemented by data collected from key informants. In addition to descriptive statistics and statistical tests (chi-square test and t-test), a logit model was employed to identify the determinants of the fruit tree planting decision. Drought, pest incidence, land degradation, lack of inputs, lack of capital, lack of irrigation-scheme maintenance, misuse of irrigation water, and limited agricultural personnel are the major production constraints identified. The opportunities to be further exploited are better access to irrigation, main road access, the endowment of a preferred guava variety, the experience of farmers, and the proximity of the study area to a research center. The results of the logit model show that, of the different factors hypothesized to determine the fruit tree planting decision, the age of the household head, access to market, and farmers' perception of fruits' disease and pest resistance are found to be significant. The results have important implications for the promotion of fruit production, both for land degradation control and rehabilitation and for increasing the livelihoods of farming households.

Keywords: degradation, fruit, irrigation, pest

Procedia PDF Downloads 236
8880 Evaluation of Anti-Arthritic Activity of Eulophia ochreata Lindl and Zingiber cassumunar Roxb in Freund's Complete Adjuvant Induced Arthritic Rat Model

Authors: Akshada Amit Koparde, Candrakant S. Magdum

Abstract:

Objective: To investigate the anti-arthritic activity of the chloroform extract and Isolate 1 of Eulophia ochreata Lindl and the dichloromethane extract and Isolate 2 of Zingiber cassumunar Roxb in an adjuvant arthritis (AA) rat model induced by Freund's complete adjuvant (FCA). Methods: Forty-two healthy albino rats were selected and randomly divided into six groups. Freund's complete adjuvant (FCA) was used to induce arthritis, and the rats were then treated with the chloroform extract and Isolate 1, and the dichloromethane extract and Isolate 2, for 28 days. Parameters such as paw volume and haematological indices (RBC, WBC, Hb, and ESR) were studied. Structural elucidation of the active constituents Isolate 1 and Isolate 2 from Eulophia ochreata Lindl and Zingiber cassumunar Roxb will be carried out using GC-MS and 1H-NMR. Results: In FCA-induced arthritic rats, there was a significant increase in rat paw volume, whereas the groups treated with the chloroform extract and Isolate 1 of Eulophia ochreata Lindl and the dichloromethane extract and Isolate 2 of Zingiber cassumunar Roxb showed a strong, significant reduction in paw volume. The altered haematological parameters in the arthritic rats were significantly recovered to near normal by treatment with the extracts at a dose of 200 mg/kg. Further histopathological studies revealed the anti-arthritic activity of Eulophia ochreata Lindl and Zingiber cassumunar Roxb through the prevention of cartilage and bone destruction in the arthritic joints of AA rats. Conclusion: The extracts and isolates of Eulophia ochreata Lindl and Zingiber cassumunar Roxb have shown anti-arthritic activity. A decrease in paw volume and the normalization of haematological abnormalities in adjuvant-induced arthritic rats were clearly seen in the experiment. Further histopathological studies confirmed the anti-arthritic activity of Eulophia ochreata Lindl and Zingiber cassumunar Roxb.

Keywords: arthritis, Eulophia ochreata Lindl, Freund's complete adjuvant, paw volume, Zingiber cassumunar Roxb

Procedia PDF Downloads 176
8879 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models

Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski

Abstract:

The methodology for measuring the sound absorption coefficient in scaled models is based on the ISO 354 standard. The measurement is realised indirectly: the coefficient is calculated from the reverberation time of an empty chamber as well as of the chamber with an inserted sample. It is crucial to maintain stable atmospheric conditions during both measurements. Possible differences may be corrected based on the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require scaling particular factors in compliance with specified characteristic numbers. For absorption coefficient measurement, these are, for example, the frequency range or the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, it is no longer a problem to scale the frequencies, which have to be proportionally higher. However, it may be problematic to reduce the values of the attenuation coefficient. In practice, this is achieved by drying the air down to a defined relative humidity. Despite the change of frequency range and relative humidity of the air, the ISO 9613-1 standard still allows the calculation of the correction for small differences in the atmospheric conditions in the chamber during measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to obtain consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors performed measurements of reverberation time in a chamber made at a 1/8 scale, in the corresponding frequency range of 800 Hz - 40 kHz and at different values of the relative air humidity (40%-5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with theoretical ones. In general, the values correspond with each other, but for high frequencies and low values of relative air humidity the differences are significant.
Those discrepancies may directly influence the values of the measured sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine a correction minimizing the described inaccuracy.
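For reference, the pure-tone attenuation coefficient of ISO 9613-1 can be computed directly. The sketch below transcribes the standard's formulas from memory, so the constants should be verified against the standard before any serious use.

```python
import numpy as np

def alpha_iso9613(f, T=293.15, rh=50.0, pa=101.325):
    """ISO 9613-1 pure-tone atmospheric absorption, dB/m.
    f in Hz, T in K, rh = relative humidity in %, pa in kPa.
    Constants transcribed from memory -- verify before use."""
    pr, T0, T01 = 101.325, 293.15, 273.16
    # molar concentration of water vapour, %
    C = -6.8346 * (T01 / T) ** 1.261 + 4.6151
    h = rh * 10.0 ** C * (pr / pa)
    # relaxation frequencies of oxygen and nitrogen, Hz
    frO = (pa / pr) * (24.0 + 4.04e4 * h * (0.02 + h) / (0.391 + h))
    frN = (pa / pr) * (T / T0) ** -0.5 * (
        9.0 + 280.0 * h * np.exp(-4.17 * ((T / T0) ** (-1 / 3) - 1.0)))
    return 8.686 * f ** 2 * (
        1.84e-11 * (pr / pa) * (T / T0) ** 0.5
        + (T / T0) ** -2.5 * (
            0.01275 * np.exp(-2239.1 / T) / (frO + f ** 2 / frO)
            + 0.1068 * np.exp(-3352.0 / T) / (frN + f ** 2 / frN)))

a_full = alpha_iso9613(1000.0, rh=50.0)    # full scale: 1 kHz, ambient air
a_model = alpha_iso9613(8000.0, rh=10.0)   # 1/8 scale: 8 kHz, dried air
```

At model scale the frequencies are multiplied by the scale factor (here 8) and the air is dried to keep the scaled attenuation within reach; high frequency combined with low humidity is precisely the regime where the abstract reports significant discrepancies.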

Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling

Procedia PDF Downloads 421
8878 Using the Weakest Precondition to Achieve Self-Stabilization in Critical Networks

Authors: Antonio Pizzarello, Oris Friesen

Abstract:

Networks, such as the electric power grid, must demonstrate exemplary performance and integrity. Integrity depends on the quality of both the system design model and the deployed software, and the integrity of the deployed software is key, both for the original version and for the many versions produced throughout numerous maintenance activities. Current software engineering technology and practice do not produce adequate integrity. Distributed systems utilize networks where each node is an independent computer system. The connections between them are realized via a network that is normally redundantly connected, to guarantee the presence of a path between two nodes in the case of failure of some branch. Furthermore, at each node there is software which may fail. Self-stabilizing protocols are usually present that recognize failure in the network and perform a repair action that brings the node back to a correct state. These protocols, first introduced by E. W. Dijkstra, are currently present in almost all Ethernets. Superstabilizing protocols, capable of reacting to a change in the network topology due to the removal or addition of a branch, are less common but are theoretically defined and available. This paper describes how to use the Software Integrity Assessment (SIA) methodology to analyze self-stabilizing software. SIA is based on the UNITY formalism for parallel and distributed programming, which allows the analysis of code for verifying the progress property "p leads-to q", describing the progress of all computations starting in a state satisfying p to a state satisfying q via the execution of one or more system modules. As opposed to demonstrably inadequate test and evaluation methods, SIA allows the analysis and verification of any network self-stabilizing software, as well as any other software that is designed to recover from failure without external intervention by maintenance personnel.
The model to be analyzed is obtained by automatic translation of the system code into a transition system that is based on the use of the weakest precondition.
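The weakest-precondition idea behind that translation can be shown in a tiny semantic toy (not the SIA tool itself): predicates are modeled as Boolean functions of the state, and wp(command, Q) holds in exactly those states from which executing the command establishes Q.

```python
def wp(command, postcondition):
    """Weakest precondition of a state-transforming command:
    wp(S, Q)(s) holds iff running S from state s establishes Q."""
    return lambda state: postcondition(command(dict(state)))

def inc_x(state):                    # the command  x := x + 1
    state["x"] = state["x"] + 1
    return state

x_positive = lambda s: s["x"] > 0    # postcondition  x > 0

# wp(x := x + 1, x > 0) is equivalent to the predicate x > -1
pre = wp(inc_x, x_positive)
```

For the assignment x := x + 1 and the postcondition x > 0, the computed precondition is equivalent to x > -1; a leads-to property p ↦ q is then verified by showing that each module's wp keeps every computation that starts in p progressing toward q.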

Keywords: network, power grid, self-stabilization, software integrity assessment, UNITY, weakest precondition

Procedia PDF Downloads 223
8877 Fruits and Vegetable Consumers' Behaviour towards Organised Retailers: Evidence from India

Authors: K. B. Ramappa, A. V. Manjunatha

Abstract:

Consumerism in India is witnessing unprecedented growth, driven by favourable demographics, a rising young and working population, rising income levels, urbanization, and growing brand orientation. In addition, the increasing level of awareness of health, hygiene, and quality has made consumers think about fairly traded goods and brands. This has made retailing extremely important to everyone, because without retailers consumers would not have access to day-to-day products. The increased competition among retailers has contributed significantly to rising consumer awareness of quality products and brand loyalty. Many existing empirical studies have mainly focused on the net saving of consumers at organised retail vis-à-vis unorganised retail shops. In this article, the authors have analysed Bangalore consumers' attitudes towards buying fruits and vegetables and their choice of retail outlets. The primary data was collected from 100 consumers in Bangalore City during October 2014. Sample consumers buying at supermarkets, convenience stores, and hypermarkets were purposively selected. The collected data was analyzed using descriptive statistics and a multinomial logit model. It was found that, among all variables, quality and price were the major factors accounting for buying fruits and vegetables at organized retail shops. The empirical results of the multinomial logit model reveal that annual net income was positively associated with the Big Bazar and Food World consumers and negatively associated with the Reliance Fresh, More, and Niligiris consumers, as compared with the HOPCOMS consumers. Monthly expenditure on fruits and vegetables was positively, and the age of the consumer negatively, related to the consumers' choice of buying at modern retail markets. Consumers were willing to buy at modern retail outlets irrespective of the distance.
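A hedged sketch of the multinomial logit machinery: softmax regression fitted by gradient ascent on synthetic consumer data (income and age predicting a choice among three outlet types). The survey variables and estimates above are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
n, k = 300, 3                                   # consumers, outlet types
X = np.column_stack([np.ones(n),
                     rng.normal(0, 1, n),       # standardized income
                     rng.normal(0, 1, n)])      # standardized age
B_true = np.array([[0.0, 0.0, 0.0],             # intercepts
                   [0.0, 1.5, -1.0],            # income effects
                   [0.0, -0.5, 1.0]])           # age effects; column 0 = base
U = X @ B_true
P = np.exp(U) / np.exp(U).sum(axis=1, keepdims=True)
y = np.array([rng.choice(k, p=p) for p in P])   # simulated outlet choices

def fit_mnl(X, y, k, lr=0.5, iters=3000):
    """Multinomial logit via gradient ascent; base category normalized to 0."""
    B = np.zeros((X.shape[1], k))
    Y = np.eye(k)[y]                            # one-hot outcomes
    for _ in range(iters):
        U = X @ B
        P = np.exp(U - U.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        B += lr * X.T @ (Y - P) / len(y)        # gradient of the log-likelihood
        B[:, 0] = 0.0                           # identification constraint
    return B

B_hat = fit_mnl(X, y, k)
```

As in the study, each non-base column of coefficients is read relative to the base outlet category (here column 0, playing the role HOPCOMS plays in the text).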

Keywords: organized retailers, consumers' attitude, consumers' preference, fruits, vegetables, multinomial logit, Bangalore

Procedia PDF Downloads 413
8876 Improving the Biomechanical Resistance of a Treated Tooth via Composite Restorations Using Optimised Cavity Geometries

Authors: Behzad Babaei, B. Gangadhara Prusty

Abstract:

The objective of this study is to assess the hypotheses that a restored tooth with a class II occlusal-distal (OD) cavity can be strengthened by designing an optimized cavity geometry, as well as selecting the composite restoration with optimized elastic moduli when there is a sharp de-bonded edge at the interface of the tooth and restoration. Methods: A scanned human maxillary molar tooth was segmented into dentine and enamel parts. The dentine and enamel profiles were extracted and imported into a finite element (FE) software. The enamel rod orientations were estimated virtually. Fifteen models for the restored tooth with different cavity occlusal depths (1.5, 2, and 2.5 mm) and internal cavity angles were generated. By using a semi-circular stone part, a 400 N load was applied to two contact points of the restored tooth model. The junctions between the enamel, dentine, and restoration were considered perfectly bonded. All parts in the model were considered homogeneous, isotropic, and elastic. The quadrilateral and triangular elements were employed in the models. A mesh convergence analysis was conducted to verify that the element numbers did not influence the simulation results. According to the criteria of a 5% error in the stress, we found that a total element number of over 14,000 elements resulted in the convergence of the stress. A Python script was employed to automatically assign 2-22 GPa moduli (with increments of 4 GPa) for the composite restorations, 18.6 GPa to the dentine, and two different elastic moduli to the enamel (72 GPa in the enamel rods’ direction and 63 GPa in perpendicular one). The linear, homogeneous, and elastic material models were considered for the dentine, enamel, and composite restorations. 108 FEA simulations were successively conducted. Results: The internal cavity angles (α) significantly altered the peak maximum principal stress at the interface of the enamel and restoration. 
The strongest structures against the contact loads were observed in the models with α = 100° and 105°. Even when the enamel rods’ directional mechanical properties were disregarded, interestingly, the models with α = 100° and 105° exhibited the highest resistance against the mechanical loads. Regarding the effect of occlusal cavity depth, the models with 1.5 mm depth showed higher resistance to contact loads than the models with deeper cavities (2.0 and 2.5 mm). Moreover, composite moduli in the range of 10-18 GPa alleviated the stress levels in the enamel. Significance: For the class II OD cavity models in this study, the optimal geometries, composite properties, and occlusal cavity depths were determined. Designing the cavities with α ≥ 100° was significantly effective in minimizing peak stress levels. The composite restoration with optimized properties reduced the stress concentrations at critical points of the models. Additionally, when more enamel was preserved, the enamel-restoration interface was sturdier against the mechanical loads.
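
The 5% mesh-convergence criterion described above can be sketched as a simple refinement loop. The `peak_stress` function here is a stand-in for an FE solve with a made-up convergence trend, not the authors' model; only the loop structure is the point:

```python
def peak_stress(n_elements):
    # Stand-in for an FE solve: peak stress converges as the mesh refines.
    return 120.0 * (1.0 + 1.0 / n_elements ** 0.5)

def converge(solver, start=1000, factor=1.5, tol=0.05):
    """Refine until the relative change in the monitored quantity
    falls below tol (the 5% criterion mentioned in the abstract)."""
    n = start
    prev = solver(n)
    while True:
        n = int(n * factor)
        cur = solver(n)
        if abs(cur - prev) / abs(prev) < tol:
            return n, cur          # mesh size and converged value
        prev = cur

n_conv, stress = converge(peak_stress)
```

In the study the monitored quantity was the peak maximum principal stress, and roughly 14,000 elements satisfied the criterion; the toy solver above converges much earlier.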

Keywords: dental composite restoration, cavity geometry, finite element approach, maximum principal stress

Procedia PDF Downloads 102
8875 Reverse Logistics Network Optimization for E-Commerce

Authors: Albert W. K. Tan

Abstract:

This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) Identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains. (ii) Formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios. (iii) Proposing a mathematical model to optimize the reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies.
In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. On the contrary, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed-integer linear programming to find optimal solutions that minimize costs while meeting organizational objectives.
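
A minimal instance of the cost-minimizing network model described above can be sketched as a transportation-style linear program: returns flow from collection points to processing facilities, subject to facility capacities. All numbers are hypothetical, and this tiny LP is far simpler than the authors' full model:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 collection points with return volumes,
# 2 processing facilities with capacities, and unit transport costs.
cost = np.array([[4.0, 6.0],
                 [5.0, 3.0]])        # cost[i, j]: point i -> facility j
returns = np.array([80.0, 70.0])     # volume to ship from each point
capacity = np.array([100.0, 90.0])   # facility capacities

# Flatten x[i, j] into a vector of 4 decision variables.
c = cost.ravel()
A_eq = np.array([[1, 1, 0, 0],       # all returns from point 0 shipped
                 [0, 0, 1, 1]])      # all returns from point 1 shipped
A_ub = np.array([[1, 0, 1, 0],       # facility 0 capacity
                 [0, 1, 0, 1]])      # facility 1 capacity

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=returns,
              bounds=[(0, None)] * 4, method="highs")
flows = res.x.reshape(2, 2)
```

Here the optimum routes each collection point to its cheapest facility; a realistic model would add binary facility-opening variables (hence the mixed-integer formulation) and lead-time constraints.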

Keywords: reverse logistics, supply chain management, optimization, e-commerce

Procedia PDF Downloads 38
8874 A Qualitative Description of the Dynamics in the Interactions between Three Populations: Pollinators, Plants, and Herbivores

Authors: Miriam Sosa-Díaz, Faustino Sánchez-Garduño

Abstract:

In population dynamics, the study of both the abundance and the spatial distribution of the populations in a given habitat is a fundamental issue. From an ecological point of view, the determination of the factors influencing such changes involves important problems. In this paper, a mathematical model to describe the temporal and spatio-temporal dynamics of the interaction of three populations (pollinators, plants and herbivores) is presented. The study we present is carried out in stages: 1. The temporal dynamics and 2. The spatio-temporal dynamics. In turn, each of these stages is developed by considering three cases which correspond to the dynamics of each type of interaction. For instance, for stage 1, we consider three nonlinear ODE systems describing the pollinator-plant, plant-herbivore and plant-pollinator-herbivore interactions, respectively. In each of these systems different types of dynamical behaviors are reported, namely transcritical and pitchfork bifurcations, existence of a limit cycle, existence of a heteroclinic orbit, etc. For the spatio-temporal dynamics of the two mathematical models, a novel factor is introduced: both the pollinators and the herbivores are assumed to move towards those places of the habitat where the plant population density is high. In mathematical terms, this means that the diffusive part of the pollinator and herbivore equations depends on the plant population density. The analysis of this part is presented by considering pairs of populations, i.e., the pollinator-plant and plant-herbivore interactions, and at the end the two mathematical models are presented; these consist of two coupled nonlinear partial differential equations of reaction-diffusion type. These are defined on a rectangular domain with homogeneous Neumann boundary conditions. We focus on the role played by the density-dependent diffusion term in the coexistence of the populations.
For both the temporal and spatio-temporal dynamics, several numerical simulations are included.
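
The temporal (stage 1) pollinator-plant interaction can be illustrated with a toy ODE system. The equations and parameters below are illustrative stand-ins chosen for a stable coexistence equilibrium, not the authors' model:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy pollinator-plant mutualism: p = plant density, a = pollinator density.
# Plants grow logistically with a saturating pollination benefit; pollinators
# need plants and are self-limited.
def rhs(t, y):
    p, a = y
    dp = p * (1.0 - p) + 0.3 * p * a / (1.0 + a)
    da = a * (-0.3 + 0.8 * p / (1.0 + p) - 0.1 * a)
    return [dp, da]

sol = solve_ivp(rhs, (0.0, 100.0), [0.5, 0.5], rtol=1e-8, atol=1e-10)
p_end, a_end = sol.y[:, -1]   # settles near the coexistence equilibrium
```

For these parameters the trajectory converges to a coexistence steady state near (p, a) ≈ (1.17, 1.31); changing the pollinator mortality plays the role of the bifurcation parameters discussed in the abstract.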

Keywords: bifurcation, heteroclinic orbits, steady state, traveling wave

Procedia PDF Downloads 300
8873 Satellite Connectivity for Sustainable Mobility

Authors: Roberta Mugellesi Dow

Abstract:

As the climate crisis becomes unignorable, it is imperative that new services are developed addressing not only the needs of customers but also their impact on the environment. The Telecommunication and Integrated Application (TIA) Directorate of ESA is supporting the green transition with particular attention to sustainable mobility. “Accelerating the shift to sustainable and smart mobility” is at the core of the European Green Deal strategy, which seeks a 90% reduction in related emissions by 2050. Transforming the way that people and goods move is essential to increasing mobility while decreasing environmental impact, and transport must be considered holistically to produce a shared vision of green intermodal mobility. The use of space technologies, integrated with terrestrial technologies, is an enabler of smarter traffic management and increased transport efficiency for automated and connected multimodal mobility. Satellite connectivity, including future 5G networks, and digital technologies such as Digital Twin, AI, Machine Learning, and cloud-based applications are key enablers of sustainable mobility. SatCom is essential to ensure that connectivity is ubiquitously available, even in remote and rural areas or in case of a failure, through the convergence of terrestrial and SatCom connectivity networks. This is especially crucial when there are risks of network failures or cyber-attacks targeting terrestrial communication. SatCom ensures communication network robustness and resilience. The combination of terrestrial and satellite communication networks is making possible intelligent and ubiquitous V2X systems and PNT services with significantly enhanced reliability and security, hyper-fast wireless access, as well as much more seamless communication coverage. SatNav is essential in providing accurate tracking and tracing capabilities for automated vehicles and in guiding them to target locations.
SatNav can also enable location-based services like car sharing applications, parking assistance, and fare payment. In addition to GNSS receivers, wireless connections, radar, lidar, and other installed sensors can enable automated vehicles to monitor their surroundings, to ‘talk to each other’ and with infrastructure in real time, and to respond to changes instantaneously. SatEO can be used to provide the maps required for traffic management, as well as to evaluate conditions on the ground, assess changes, and provide key data for monitoring and forecasting air pollution and other important parameters. Earth Observation derived data are used to provide meteorological information such as wind speed and direction, humidity, and other parameters that must be fed into models contributing to traffic management services. The paper will provide examples of services and applications that have been developed with the aim of identifying innovative solutions and new business models enabled by new digital technologies, engaging the space and non-space ecosystems together to deliver value and provide innovative, greener solutions in the mobility sector. Examples include Connected Autonomous Vehicles, electric vehicles, green logistics, and others. Relevant technologies include hybrid SatCom and 5G providing ubiquitous coverage, IoT integration with non-space technologies, as well as navigation and PNT technology, and other space data.

Keywords: sustainability, connectivity, mobility, satellites

Procedia PDF Downloads 133
8872 Bioresorbable Medicament-Eluting Grommet Tube for Otitis Media with Effusion

Authors: Chee Wee Gan, Anthony Herr Cheun Ng, Yee Shan Wong, Subbu Venkatraman, Lynne Hsueh Yee Lim

Abstract:

Otitis media with effusion (OME) is the leading cause of hearing loss in children worldwide. Surgery to insert a grommet tube into the eardrum is usually indicated for OME unresponsive to antimicrobial therapy. It is the most common surgery for children. However, current commercially available grommet tubes are non-bioresorbable and not drug-treated, with an unpredictable duration of retention on the eardrum to ventilate the middle ear. Their functionality is impaired when clogged or chronically infected, requiring additional surgery to remove/reinsert grommet tubes. We envisaged that a novel fully bioresorbable grommet tube with sustained antibiotic release technology could address these drawbacks. In this study, drug-loaded bioresorbable poly(L-lactide-co-ε-caprolactone) (PLC) copolymer grommet tubes were fabricated by a microinjection moulding technique. In vitro drug release and a degradation model of the PLC tubes were studied. Antibacterial property was evaluated by incubating PLC tubes with P. aeruginosa broth. Surface morphology was analyzed using scanning electron microscopy. A preliminary animal study was conducted using guinea pigs as an in vivo model to evaluate PLC tubes with and without drug, with the commercial Mini Shah grommet tube as comparison. Our in vitro data showed sustained drug release over 3 months. All PLC tubes revealed exponential degradation profiles over time. Modeling predicted loss of tube functionality in water at approximately 14 weeks and 17 weeks for PLC with and without drug, respectively. Generally, PLC tubes had less bacterial adherence, attributed to their much smoother surfaces compared to Mini Shah. Antibiotic from the PLC tube further made bacterial adherence on the surface negligible. The tubes showed neither inflammation nor otorrhea after 18 weeks post-insertion in the eardrums of guinea pigs, but had demonstrated a severe degree of bioresorption. Histology confirmed the new PLC tubes were biocompatible.
Analyses of the PLC tubes in the eardrums showed bioresorption profiles close to our in vitro degradation models. The bioresorbable antibiotic-loaded grommet tubes showed good predictability in functionality. The smooth surface and sustained release technology reduced the risk of tube infection. A functional tube duration of 18 weeks allowed a sufficient ventilation period to treat OME. Our ongoing studies include modifying the surface properties with protein coating, optimizing the drug dosage in the tubes to enhance their performance, evaluating the functional outcome on hearing after full resorption of the grommet tube and healing of the eardrums, and developing an animal model with OME to further validate our in vitro models.
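
An exponential degradation profile with a functionality threshold, of the kind used above to predict the roughly 14-week loss of tube function, can be sketched as follows. The molecular weights, the single measured time point, and the failure threshold are all hypothetical numbers chosen for illustration, not the measured PLC data:

```python
import math

# First-order degradation sketch: molecular weight decays exponentially,
# and the tube is assumed to lose function below a threshold.
Mn0 = 100.0          # initial molecular weight (arbitrary units)
Mn_8wk = 45.0        # hypothetical value measured at 8 weeks
k = -math.log(Mn_8wk / Mn0) / 8.0           # first-order rate (1/week)

Mn_threshold = 25.0  # assumed functional limit
t_fail = math.log(Mn0 / Mn_threshold) / k   # weeks until threshold
```

With these illustrative numbers the predicted functional lifetime comes out near 14 weeks; fitting the same model to the drug-free tube data would give the longer 17-week estimate.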

Keywords: bioresorbable polymer, drug release, grommet tube, guinea pigs, otitis media with effusion

Procedia PDF Downloads 450
8871 Sentiment Analysis on University Students’ Evaluation of Teaching and Their Emotional Engagement

Authors: Elisa Santana-Monagas, Juan L. Núñez, Jaime León, Samuel Falcón, Celia Fernández, Rocío P. Solís

Abstract:

Teaching practices have been widely studied in relation to students' outcomes, positioning themselves as one of their strongest catalysts and influencing students' emotional experiences. In the higher education context, teachers become even more crucial, as many students ground their decisions on which courses to enroll in on opinions and ratings of teachers from other students. Unfortunately, sometimes universities do not provide the personal, social, and academic stimulation students demand to be actively engaged. To evaluate their teachers, universities often rely on students' evaluations of teaching (SET) collected via Likert-scale surveys. Despite its usefulness, such a method has been questioned in terms of validity and reliability. Alternatively, researchers can rely on qualitative answers to open-ended questions. However, the unstructured nature of the answers and the large amount of information obtained require an overwhelming amount of work. The present work presents an alternative approach to analyse such data: sentiment analysis (SA). To the best of our knowledge, no research before has included results from SA in an explanatory model to test how students' sentiments affect their emotional engagement in class. The sample of the present study included a total of 225 university students (mean age = 26.16, SD = 7.4, 78.7% women) from the Educational Sciences faculty of a public university in Spain. Data collection took place during the academic year 2021-2022. Students accessed an online questionnaire using a QR code. They were asked to answer the following open-ended question: "If you had to explain to a peer who doesn't know your teacher how he or she communicates in class, what would you tell them?". Sentiment analysis was performed using Microsoft's pre-trained model. The reliability of the measure was estimated between the tool and one of the researchers, who coded all answers independently.
Cohen's kappa and the average pairwise percent agreement were estimated with ReCal2. Cohen's kappa was .68, and the agreement reached was 90.8%, both considered satisfactory. To test the hypothesized relations among SA and students' emotional engagement, a structural equation model (SEM) was estimated. Results demonstrated a good fit of the data: RMSEA = .04, SRMR = .03, TLI = .99, CFI = .99. Specifically, the results showed that students’ sentiment regarding their teachers’ teaching positively predicted their emotional engagement (β = .16, 95% CI [.02, .30]). In other words, when students' opinion toward their instructors' teaching practices is positive, it is more likely for students to engage emotionally in the subject. Altogether, the results show a promising future for sentiment analysis techniques in the field of education. They suggest the usefulness of this tool when evaluating relations among teaching practices and student outcomes.
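
The tool-versus-human reliability check described above boils down to Cohen's kappa over paired codes. A self-contained computation (with made-up example labels, not the study's codings) looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(freq_a) | set(freq_b)
    expected = sum(freq_a[l] * freq_b[l] for l in labels) / n ** 2
    return (observed - expected) / (1 - expected)

# Made-up sentiment codes: automated tool vs. human coder for ten answers.
tool  = ["pos", "pos", "neg", "neu", "pos", "neg", "pos", "neu", "neg", "pos"]
human = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neu", "neg", "neg"]
kappa = cohens_kappa(tool, human)
```

The example yields kappa ≈ .67 from 80% raw agreement, illustrating why kappa sits below the percent agreement once chance agreement is discounted.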

Keywords: sentiment analysis, students' evaluation of teaching, structural-equation modelling, emotional engagement

Procedia PDF Downloads 85
8870 Effects of pH, Load Capacity and Contact Time in the Sulphate Sorption onto a Functionalized Mesoporous Structure

Authors: Jaime Pizarro, Ximena Castillo

Abstract:

The intensive use of water in agriculture, industry, and human consumption and increasing pollution are factors that reduce the availability of water for future generations; the challenge is to advance sustainable and low-cost solutions to reuse water and to facilitate the availability of the resource in quality and quantity. The use of new low-cost materials with sorbent capacity for pollutants is a solution that contributes to the improvement and expansion of water treatment and reuse systems. Fly ash, a residue from the combustion of coal in power plants that is produced in large quantities in newly industrialized countries, contains a high amount of silicon oxides and aluminum oxides, whose properties can be used for the synthesis of mesoporous materials. Properly functionalized, this material allows obtaining matrices with high sorption capacity. Mesoporous materials have a large surface area, thermal and mechanical stability, a uniform porous structure, and high sorption and functionalization capacities. The goal of this study was to develop a hexagonal mesoporous siliceous material (HMS) for the adsorption of sulphate from industrial and mining waters. The silica was extracted from fly ash after calcination at 850 °C, followed by the addition of water. The mesoporous structure has a surface area of 282 m² g⁻¹ and a pore size of 5.7 nm and was functionalized with ethylenediamine through a self-assembly method. The material was characterized by Diffuse Reflectance Infrared Fourier Transform Spectroscopy (DRIFTS). The sulphate sorption capacity was evaluated according to pH, maximum load capacity and contact time. The maximum sulphate adsorption capacity was 146.1 mg g⁻¹, which is three times higher than that of commercial sorbents. The kinetic data were fitted by a pseudo-second-order model with a high coefficient of linear regression at different initial concentrations.
The adsorption isotherm that best fitted the experimental data was the Freundlich model.
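
The pseudo-second-order fit mentioned above is usually done on the linearized form t/qt = 1/(k·qe²) + t/qe, so a straight-line fit of t/qt against t yields qe from the slope and k from the intercept. The sketch below uses synthetic data generated from the model itself (the rate constant and time points are illustrative, only qe = 146.1 mg g⁻¹ is taken from the abstract):

```python
import numpy as np

# Pseudo-second-order kinetics: qt = (qe**2 * k * t) / (1 + qe * k * t).
qe_true, k_true = 146.1, 1.2e-3           # mg/g and g/(mg*min), illustrative k
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0])   # minutes
qt = (qe_true ** 2 * k_true * t) / (1.0 + qe_true * k_true * t)

# Linear regression of t/qt on t recovers the model parameters.
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope                       # equilibrium capacity
k_fit = 1.0 / (intercept * qe_fit ** 2)    # rate constant
```

With real data the scatter of t/qt about the fitted line gives the "coefficient of linear regression" quoted in the abstract; a Freundlich isotherm would be linearized analogously as log q versus log C.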

Keywords: fly ash, mesoporous siliceous, sorption, sulphate

Procedia PDF Downloads 156
8869 Collagen Deposition in Lung Parenchyma Driven by Depletion of LYVE-1+ Macrophages Protects Emphysema and Loss of Airway Function

Authors: Yinebeb Mezgebu Dagnachew, Hwee Ying Lim, Liao Wupeng, Sheau Yng Lim, Lim Sheng Jie Natalie, Veronique Angeli

Abstract:

Collagen is essential for maintaining lung structure and function, and its remodeling has been associated with respiratory diseases, including chronic obstructive pulmonary disease (COPD). However, the cellular mechanisms driving collagen remodeling and the functional implications of this process in the pathophysiology of pulmonary diseases remain poorly understood. Using a mouse model of Lyve-1-expressing macrophage depletion, we found that the absence of this subpopulation of tissue-resident macrophages led to the preferential deposition of type I collagen fibers around the alveoli and bronchi in the steady state. Further analysis by polarized light microscopy revealed that the collagen fibers accumulating in the lungs depleted of Lyve-1+ macrophages were thicker and crosslinked. A decrease in MMP-9 gene expression and proteolytic activity, together with an increase in Col1a1, Timp-3 and Lox gene expression, accompanied the collagen alterations. Next, we investigated the effect of the collagen remodeling on the pathophysiology of COPD and airway function in mice lacking Lyve-1+ macrophages exposed chronically to cigarette smoke (CS), a well-established animal model of COPD. We showed that the deposition of collagen protected mice against the destruction of alveoli (emphysema) and bronchial thickening after CS exposure and prevented loss of airway function. Thus, we demonstrate that interstitial Lyve-1+ macrophages regulate the composition, amount, and architecture of the collagen network in the lungs and that such collagen remodeling functionally impacts the development of COPD. This study further supports the potential of targeting collagen as a promising approach to treating respiratory diseases.

Keywords: lung, extracellular matrix, chronic obstructive pulmonary disease, matrix metalloproteinases, collagen

Procedia PDF Downloads 37
8868 Review and Analysis of Parkinson's Tremor Genesis Using Mathematical Model

Authors: Pawan Kumar Gupta, Sumana Ghosh

Abstract:

Parkinson's Disease (PD) is a long-term neurodegenerative movement disorder of the central nervous system with vast symptoms related to the motor system. The common symptoms of PD are tremor, rigidity, bradykinesia/akinesia, and postural instability, but the clinical picture includes other motor and non-motor issues. The motor symptoms of the disease are a consequence of the death of neurons in a region of the midbrain known as the substantia nigra pars compacta, leading to a decreased level of the neurotransmitter dopamine. The cause of this neuron death is not clearly known but involves the formation of Lewy bodies, an abnormal aggregation or clumping of the protein alpha-synuclein in the neurons. Unfortunately, there is no cure for PD, and the management of this disease is challenging. Therefore, it is critical for a patient to be diagnosed at an early stage. A limited choice of drugs is available to improve the symptoms, but these become less and less effective over time. Apart from that, with rapid growth in the field of science and technology, other methods such as multi-area brain stimulation are used to treat patients. In order to develop advanced techniques and to support drug development for treating PD patients, an accurate mathematical model is needed to explain the underlying relationship of dopamine secretion in the brain with hand tremors. There has been a lot of effort in the past few decades on modeling PD tremors and treatment effects from a computational point of view. These models can effectively save time as well as the cost of drug development for the pharmaceutical industry and be helpful for selecting appropriate treatment mechanisms among all possible options. In this review paper, an effort is made to investigate studies on PD modeling and analysis and to highlight some of the key advances in the field over the past few decades, with a discussion of the current challenges.

Keywords: Parkinson's disease, deep brain stimulation, tremor, modeling

Procedia PDF Downloads 140
8867 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models

Authors: A. B. M. Rezaul Islam, Ernur Karadogan

Abstract:

Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelasticity behavior. Their thermomechanical behaviors have been modeled by numerous researchers from microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers and Ivshin-Pence models are some of the most popular SMA macroscopic phenomenological constitutive models. They describe SMA behavior in terms of stress, strain and temperature. These models involve material parameters that have associated uncertainty. At different operating temperatures, the uncertainty propagates to the output when the material is subjected to loading followed by unloading. The propagation of uncertainty while utilizing these models in real-life applications can result in performance discrepancies or failure at extreme conditions. To resolve this, we used a probabilistic approach to perform the sensitivity and uncertainty analysis of the Tanaka, Liang-Rogers, and Ivshin-Pence models. Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods have been used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results make evident that the models vary with the change in operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work highlights the sensitivity and uncertainty analysis results and compares them at different temperatures and loading conditions for all these models. The analysis presented will aid in designing engineering applications by eliminating the probability of model failure due to uncertainty in the input parameters. Thus, it is recommended to have a proper understanding of the sensitive parameters and the uncertainty propagation at several operating temperatures and loading conditions for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
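
The first-order Sobol indices at the heart of the analysis above can be estimated with a Monte Carlo (Saltelli-type) scheme. The sketch below applies it to a toy linear function standing in for an SMA constitutive response, whose indices are known analytically (16/17 and 1/17); the real study applies this machinery to the Tanaka, Liang-Rogers, and Ivshin-Pence models:

```python
import numpy as np

def sobol_first_order(f, d, n=20000, rng=None):
    """Monte Carlo estimate of first-order Sobol indices for a model f
    with d inputs, each uniform on [0, 1] (Saltelli sampling scheme)."""
    rng = rng or np.random.default_rng(0)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))   # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # resample only input i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Toy stand-in for a constitutive response, linear in two "material" inputs.
f = lambda X: 4.0 * X[:, 0] + X[:, 1]
S = sobol_first_order(f, d=2)   # analytic values: 16/17 ≈ .941, 1/17 ≈ .059
```

The dominant index flags the parameter whose uncertainty most deserves tight characterization, which is exactly how the average and stress-dependent indices are used in the abstract.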

Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, sobol, shape memory alloy, uncertainty analysis

Procedia PDF Downloads 144
8866 Land Management Framework: A Case of Kolkata

Authors: Alokananda Nath

Abstract:

Land is an important issue anywhere in the world, as it is one of the fundamental elements in human settlements. Since urban areas are considered to be the drivers of economy for any country across the world and the phenomenon of ‘urbanization’ is happening everywhere, there is always greater pressure on urban land and its management. Many states in India have realized the importance of land as a valuable resource and have implemented frameworks for managing and developing land. But in West Bengal no such statutory framework has been formulated till now, and a very outdated model of land acquisition for public purpose is practiced. Due to the lop-sided character of urban growth in the entire eastern region of India, the city of Kolkata continues to bear the burden of excessive growth of population and consequent urbanization of the adjoining areas at a rapid pace. This research tries to look into these conflicts with respect to the present pattern of development in the context of Kolkata and suggest a system for land management in order to implement the planning processes. For this purpose, five case study areas were taken up within the Kolkata Metropolitan Area and a subsequent analysis of their present land management and development techniques was done. The findings reveal that there is a lack of political will as well as administrative inefficiency on the part of both the development authority and the local bodies. Mostly, the local bodies lack the financial resources and technical expertise to work out any kind of land management framework or model to manage the development that is happening. All these place undue strain on city infrastructure systems and reduce the potential of cities to contribute as engines of economic growth.
The focus of reforms, therefore, ought to be on streamlining the urban planning process, judicious and optimal land use, efficient plan implementation mechanisms, improvement of titling and registration processes.

Keywords: urbanization, land management framework, land development, policy reforms, land-use planning processes

Procedia PDF Downloads 278
8865 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite the frequently criticized disadvantages of traditionally used paper-and-pencil assessment, it is the most frequently used method in our schools. Although such assessments provide acceptable measurement, they are not capable of measuring all the aspects and the richness of learning and knowledge. Also, many assessments used in schools decontextualize the assessment from the learning, and they focus on learners’ standing on a particular topic but do not capture how student learning changes over time. For these reasons, many scholars advocate that using simulations and games (S&G) as a tool for assessment has significant potential to overcome the problems in traditionally used methods. S&G can benefit from the change in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students’ learning at a particular time point. To investigate the potential of using educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess their performance without interrupting their learning. The experiment was conducted in an undergraduate-level engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research study is to examine whether the proposed method of AEA is valid for assessing student learning in a 3D educational game and to present the implementation steps. To address this question, this study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test if the assessment can measure the targeted latent constructs.
Finally, the scores of the assessment were compared with an external measure (a validated test of student learning on digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA = 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly onto the latent factors in the model. In the second analysis, a multiple regression was used to test whether the external measure significantly predicts students' performance in the game. The results of the regression indicated that the two predictors explained 36.3% of the variance (R² = .36, F(2, 96) = 27.42, p < .001). Students' posttest scores significantly predicted game performance (β = .60, p < .001). These statistical results show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study helps researchers understand how to design an AEA and showcases an implementation, providing an example methodology for validating this type of assessment.
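The convergent-validity step above boils down to an ordinary least-squares regression of game performance on the external test scores and a check of the variance explained. The sketch below illustrates that computation with entirely synthetic stand-in data (the study's actual scores are not available); the variable names and effect sizes are assumptions, not the authors' data.

```python
import numpy as np

# Hypothetical stand-ins for the study's variables: external test scores
# (pretest, posttest) as predictors, in-game performance as the outcome.
rng = np.random.default_rng(0)
n = 99                                    # participant count from the abstract
pretest = rng.normal(50, 10, n)
posttest = pretest + rng.normal(15, 8, n)
game_score = 0.6 * posttest + 0.1 * pretest + rng.normal(0, 10, n)

# Ordinary least squares: game_score ~ intercept + pretest + posttest
X = np.column_stack([np.ones(n), pretest, posttest])
beta, *_ = np.linalg.lstsq(X, game_score, rcond=None)
pred = X @ beta

# Coefficient of determination R^2 (analogous to the 36.3% figure reported)
ss_res = np.sum((game_score - pred) ** 2)
ss_tot = np.sum((game_score - game_score.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.3f}")
```

A real analysis would additionally report the F-statistic and per-coefficient p-values, e.g. via `statsmodels`, but the R² computation above is the core of the variance-explained claim.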

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 421
8864 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column

Authors: G. Rajapakse, S. Jayasinghe, A. Fleming

Abstract:

This paper aims to experimentally validate the control strategy used for the electrical power converters of a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC's unidirectional air turbine-generator produces output power in discrete large pulses, so the system requires power conditioning prior to grid integration. This is achieved with a back-to-back power converter and an energy storage system: a Li-ion battery is connected to the dc-link of the back-to-back converter through a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supplied and demanded power. All three electrical power converters in the arrangement are controlled using a finite control set model predictive control (FCS-MPC) strategy. The rectifier controller regulates the turbine at a set rotational speed, keeping the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The bidirectional dc-dc converter controller holds the dc-link voltage at its reference value. The software modeling of the OWC system and the FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validation are being carried out at the AMC electronics laboratory. The FCS-MPC controllers for the power converters are separately coded in Code Composer Studio V8 and downloaded onto separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller drives a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60).
The power module consists of a 3-phase inverter bridge with 600 V insulated-gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The FCS-MPC is validated by comparing these experimental results to MATLAB/Simulink simulation results under similar scenarios. The results show that, under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of an OWC-WEC system with Li-ion battery energy storage.
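The defining feature of FCS-MPC is that a two-level converter has only a finite set of switching states, so at each sampling instant the controller predicts the effect of every state one step ahead and applies the one minimizing a cost function. The following is a minimal illustrative sketch of that loop for current control of an R-L load, not the authors' implementation; all parameter values, the load model, and the cost function are assumptions for the example.

```python
import numpy as np

# Assumed plant parameters: dc-link voltage, load resistance/inductance, sample time
Vdc, R, L, Ts = 400.0, 2.0, 10e-3, 50e-6

# The 8 switching states (Sa, Sb, Sc) of a two-level three-phase inverter
states = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

def voltage_alpha_beta(s):
    """Load phase voltages for switching state s, Clarke-transformed to αβ."""
    va, vb, vc = (Vdc * (si - sum(s) / 3) for si in s)
    alpha = (2 * va - vb - vc) / 3
    beta = (vb - vc) / np.sqrt(3)
    return np.array([alpha, beta])

def fcs_mpc_step(i_meas, i_ref):
    """Evaluate every switching state one step ahead; return the cheapest one."""
    best_state, best_cost = None, np.inf
    for s in states:
        v = voltage_alpha_beta(s)
        # Forward-Euler prediction of the R-L load current
        i_pred = i_meas + (Ts / L) * (v - R * i_meas)
        cost = np.linalg.norm(i_ref - i_pred)   # current-tracking cost
        if cost < best_cost:
            best_state, best_cost = s, cost
    return best_state

# From zero current toward a reference on the α axis, the controller
# picks the active vector aligned with that axis: state (1, 0, 0).
print(fcs_mpc_step(np.array([0.0, 0.0]), np.array([5.0, 0.0])))
```

In the paper's setup, the same enumerate-predict-minimize structure is applied per converter, with cost terms for speed regulation, grid power, or dc-link voltage instead of simple current tracking, and the loop executes on the microcontroller within each 50 µs (assumed) sampling period.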

Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter

Procedia PDF Downloads 113
8863 Reliability Analysis of Geometric Performance of Onboard Satellite Sensors: A Study on Location Accuracy

Authors: Ch. Sridevi, A. Chalapathi Rao, P. Srinivasulu

Abstract:

The location accuracy of data products is a critical parameter in assessing the geometric performance of satellite sensors. This study focuses on reliability analysis of onboard sensors to evaluate their location accuracy over time. The analysis utilizes field failure data and employs the Weibull distribution to determine reliability and, in turn, to understand improvements or degradations over a period of time. The analysis begins by scrutinizing the location accuracy error, the root mean square (RMS) error of the differences between ground control point coordinates observed on the product and on the map, and identifying the failure data with reference to time. A significant challenge in this study is to thoroughly analyze the possibility of an infant mortality phase in the data. To address this, the Weibull distribution is used to determine whether the data exhibit an infant stage or have transitioned into the operational phase; the shape parameter β plays a crucial role in identifying this stage. Determining the exact start of the operational phase and the end of the infant stage poses another challenge, as it is crucial to eliminate residual infant mortality or wear-out from the model, which can significantly inflate the total failure rate. To address this, the well-established statistical Laplace test is applied to infer the behavior of the sensors and to accurately ascertain the duration of the different phases of the lifetime and the time required for stabilization. This approach also helps to establish whether the bathtub curve model, which accounts for the different phases in the lifetime of a product, is appropriate for the data, and whether the thresholds for the infant period and the wear-out phase are accurately estimated, by validating the data in the individual phases with Weibull distribution curve-fitting analysis.
Once the operational phase is determined, reliability is assessed using Weibull analysis. This analysis not only provides insight into the reliability of individual sensors with regard to location accuracy over the required period of time, but also establishes a model that can be applied to automate similar analyses for various sensors and parameters using field failure data. Furthermore, the identification of the best-performing sensor through this analysis serves as a benchmark for future missions and designs, ensuring continuous improvement in sensor performance and reliability. Overall, this study provides a methodology to accurately determine the duration of the different phases in the life data of individual sensors. It enables an assessment of the time required for stabilization and provides insight into the reliability during the operational phase and the onset of the wear-out phase. By employing this methodology, designers can make informed decisions regarding sensor location accuracy performance, contributing to enhanced accuracy in satellite-based applications.
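The two statistical steps described above, a Laplace trend test to locate the phase of the bathtub curve, followed by a Weibull fit whose shape parameter β classifies the failure behaviour, can be sketched as follows. The failure times below are synthetic placeholders (the study's field data are not available), and the observation window is an assumption.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

T = 1000.0                                   # assumed observation window (days)
fail_times = np.sort(rng.uniform(0, T, 25))  # hypothetical failure instants
n = len(fail_times)

# Laplace trend test on the failure instants: U < 0 suggests a decreasing
# failure intensity (residual infant mortality), U > 0 an increasing one
# (wear-out), U near 0 a roughly constant rate (operational phase).
U = (fail_times.mean() - T / 2) / (T / np.sqrt(12 * n))
p_value = 2 * (1 - stats.norm.cdf(abs(U)))   # two-sided normal p-value
print(f"Laplace U = {U:.2f}, p = {p_value:.3f}")

# Weibull fit to the inter-failure times of the (assumed) operational phase.
# Shape beta < 1 indicates infant mortality, beta ~ 1 a constant failure
# rate, beta > 1 wear-out -- the three regions of the bathtub curve.
intervals = np.diff(fail_times)
shape, loc, scale = stats.weibull_min.fit(intervals, floc=0)
print(f"Weibull shape beta = {shape:.2f}, scale eta = {scale:.1f}")
```

In the study's workflow, the Laplace test would first be run on successive windows of the data to find the stabilization point, and the Weibull fit would then be restricted to failures after that point, so that the estimated β reflects the operational phase alone.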

Keywords: bathtub curve, geometric performance, Laplace test, location accuracy, reliability analysis, Weibull analysis

Procedia PDF Downloads 65
8862 Protected Cultivation of Horticultural Crops: Increases Productivity per Unit of Area and Time

Authors: Deepak Loura

Abstract:

Protected cultivation, or greenhouse cultivation, is the most contemporary method of producing horticultural crops both qualitatively and quantitatively, and it has gained widespread acceptance in recent decades. Protected farming, commonly referred to as controlled environment agriculture (CEA), is highly productive, land- and water-efficient, and environmentally friendly. The technology entails growing horticultural crops in a controlled environment where variables such as temperature, humidity, light, soil, water, and fertilizer are adjusted to achieve optimal output and to enable a consistent supply even during the off-season. Over the past ten years, protected cultivation of high-value crops and cut flowers has demonstrated remarkable potential. More and more agricultural and horticultural production systems are moving to protected environments as a result of the growing demand for high-quality products in global markets. By covering the crop, it is possible to control the macro- and microenvironment, enhancing plant performance and allowing for longer production periods, earlier harvests, and higher yields of better quality. These covering structures alter the plant's environment while also offering protection from wind, rain, and insects. Protected farming opens up hitherto unexplored opportunities in agriculture as the liberalised economy and improved agricultural technologies advance. Typically, the revenues from fruit, vegetable, and flower crops are 4 to 8 times higher than those from other crops, and this profit can be multiplied if these high-value crops are cultivated in protected environments such as greenhouses, net houses, and tunnels. Post-harvest losses of vegetables and cut flowers are extremely high (20–30%), but sheltered growing techniques and year-round cropping can greatly reduce post-harvest losses and enhance yield by 5 to 10 times. Seasonality and weather have a large impact on the production of vegetables and flowers.
The perishable nature of these products results in significant fluctuations in the price and quality of vegetables. A significant challenge for applying current technology in crop production is balancing year-round availability of vegetables and flowers with minimal environmental impact while remaining competitive. Protected cultivation is likely to shape the future of agriculture, since population growth is steadily reducing the land available per holding. Protected agriculture is a particularly profitable endeavour for small landholdings: farmers can build small greenhouses, net houses, nurseries, and low-tunnel greenhouses to increase their income. The rise in biotic and abiotic stress factors also favours protected agriculture. As a result of the greater productivity levels, these technologies are opening up opportunities not only for producers with larger landholdings but also for those with smaller holdings. Protected cultivation can be thought of as a precise, forward-thinking form of agriculture that covers almost all aspects of farming, subject to further evaluation of its technical applicability to local circumstances, farmer economics, and market economics.

Keywords: protected cultivation, horticulture, greenhouse, vegetable, controlled environment agriculture

Procedia PDF Downloads 76
8861 The Use of Psychological Tests in Polish Organizations - Empirical Evidence

Authors: Milena Gojny-Zbierowska

Abstract:

In recent decades, psychological tests have gained popularity as a method for evaluating personnel, bringing consulting companies solid profits that rise by up to 10% each year. The market offers a growing range of tools for the assessment of personality. In organizations, tests are used mainly in the recruitment and selection of staff. This paper attempts an initial diagnosis of the state of the use of psychological tests in Polish companies on the basis of empirical research.

Keywords: psychological tests, personality, content analysis, NEO FFI, big five personality model

Procedia PDF Downloads 365