Search results for: equivalent linear approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17253

16803 Synthesis and Characterizations of Lead-free BaO-Doped TeZnCaB Glass Systems for Radiation Shielding Applications

Authors: Rezaul K. Sk., Mohammad Ashiq, Avinash K. Srivastava

Abstract:

The use of radiation shielding technology, ranging from EMI to high-energy gamma rays, in areas such as devices, medical science, defense, nuclear power plants and medical diagnostics is increasing all over the world. However, exposure to radiations such as X-rays, gamma rays, neutrons and EMI above the permissible limits is harmful to living beings, the environment and sensitive laboratory equipment. In order to solve this problem, there is a need to develop effective radiation shielding materials. Conventionally, lead and lead-based materials are used in making shielding materials, as lead is cheap, dense and provides very effective shielding against radiation. However, the problem associated with the use of lead is its toxic and carcinogenic nature. To overcome these drawbacks, there is a great need for lead-free radiation shielding materials that are also economically sustainable. Therefore, it is necessary to pursue the synthesis of radiation-shielding glass using other heavy metal oxides (HMO) instead of lead. Lead-free BaO-doped TeZnCaB glass systems have been synthesized by the traditional melt-quenching method. X-ray diffraction analysis confirmed the glassy nature of the synthesized samples. The densities of the developed glass samples increased with BaO doping concentration, ranging from 4.292 to 4.725 g/cm³. The vibrational and bending modes of the BaO-doped glass samples were analyzed by Raman spectroscopy, and FTIR (Fourier-transform infrared spectroscopy) was performed to study the functional groups present in the samples. UV-visible characterization was used to determine optical parameters such as Urbach's energy, the refractive index and the optical energy band gap. The indirect and direct energy band gaps decreased with increasing BaO concentration, whereas the refractive index increased. X-ray attenuation measurements were performed to determine the radiation shielding parameters such as the linear attenuation coefficient (LAC), mass attenuation coefficient (MAC), half value layer (HVL), tenth value layer (TVL), mean free path (MFP), attenuation factor (Att%) and lead equivalent thickness of the lead-free BaO-doped TeZnCaB glass system. It was observed that the radiation shielding characteristics were enhanced with the addition of BaO content to the TeZnCaB glass samples. The glass samples with higher BaO content have the best attenuation performance. It can therefore be concluded that the addition of BaO to TeZnCaB glass samples is an effective way to improve their radiation shielding performance. The best lead equivalent thickness was 2.626 mm, and these glasses could be good materials for medical diagnostics applications.
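
For reference, the attenuation quantities reported above are linked by the standard Beer-Lambert relations, so each can be derived from the linear attenuation coefficient. The sketch below illustrates these relations; the numerical values are illustrative placeholders, not the authors' measurements.

```python
import numpy as np

def shielding_parameters(mu, density):
    """Standard Beer-Lambert relations linking the attenuation quantities.

    mu      : linear attenuation coefficient, LAC (1/cm)
    density : glass density (g/cm^3)
    """
    mac = mu / density          # mass attenuation coefficient (cm^2/g)
    hvl = np.log(2) / mu        # half value layer (cm)
    tvl = np.log(10) / mu       # tenth value layer (cm)
    mfp = 1.0 / mu              # mean free path (cm)
    return mac, hvl, tvl, mfp

def attenuation_percent(mu, thickness_cm):
    """Att% = (1 - I/I0) * 100 for a slab of the given thickness."""
    return (1.0 - np.exp(-mu * thickness_cm)) * 100.0

def lead_equivalent_thickness(mu_glass, t_glass_cm, mu_lead):
    """Lead thickness giving the same attenuation: mu_glass*t_glass = mu_Pb*t_Pb."""
    return mu_glass * t_glass_cm / mu_lead

# Illustrative numbers only (not the authors' measurements):
mu_glass = 0.85      # 1/cm, hypothetical LAC of a BaO-doped sample at one energy
rho_glass = 4.725    # g/cm^3, the highest density quoted in the abstract
mu_lead = 1.20       # 1/cm, hypothetical (strongly energy-dependent) LAC of lead

print(shielding_parameters(mu_glass, rho_glass))
print(attenuation_percent(mu_glass, 1.0))                  # Att% of a 1 cm slab
print(lead_equivalent_thickness(mu_glass, 1.0, mu_lead))   # cm of lead for same shielding
```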

Keywords: heavy metal oxides, lead-free, melt-quenching method, x-ray attenuation

Procedia PDF Downloads 31
16802 The Univalence Principle: Equivalent Mathematical Structures Are Indistinguishable

Authors: Michael Shulman, Paige North, Benedikt Ahrens, Dimitris Tsementzis

Abstract:

The Univalence Principle is the statement that equivalent mathematical structures are indistinguishable. We prove a general version of this principle that applies to all set-based, categorical, and higher-categorical structures defined in a non-algebraic and space-based style, as well as models of higher-order theories such as topological spaces. In particular, we formulate a general definition of indiscernibility for objects of any such structure, and a corresponding univalence condition that generalizes Rezk’s completeness condition for Segal spaces and ensures that all equivalences of structures are levelwise equivalences. Our work builds on Makkai’s First-Order Logic with Dependent Sorts, but is expressed in Voevodsky’s Univalent Foundations (UF), extending previous work on the Structure Identity Principle and univalent categories in UF. This enables indistinguishability to be expressed simply as identification, and yields a formal theory that is interpretable in classical homotopy theory, but also in other higher topos models. It follows that Univalent Foundations is a fully equivalence-invariant foundation for higher-categorical mathematics, as intended by Voevodsky.

Keywords: category theory, higher structures, inverse category, univalence

Procedia PDF Downloads 151
16801 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization

Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller

Abstract:

The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires a restriction of the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After calculating the operation of different energy systems in terms of the resulting final energy demands in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures due to the efficiency of MILP solvers but necessitates simplifying the building energy system operation. Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from the results of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.

Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization

Procedia PDF Downloads 34
16800 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Biomedical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Monitoring the glucose level in our body regularly helps us identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney diseases. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method. An experimental database is generated based on the colorimetric technique. The image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV and LUX color space values. Regression algorithms like multiple linear regression, decision tree, random forest, and XGBoost were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. The image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
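
A minimal sketch of the regression step is given below: colour features extracted from solution images are mapped to concentration with scikit-learn's multiple linear regression. The feature values and concentrations are hypothetical placeholders, not the authors' experimental database.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical colour features (mean R, G, B of the solution image) and the
# corresponding known glucose concentrations (mg/dL); values are illustrative only.
X = np.array([[180, 120, 60],
              [170, 110, 55],
              [150, 100, 50],
              [130,  90, 45],
              [110,  80, 40],
              [ 90,  70, 35]], dtype=float)
y = np.array([50, 75, 100, 150, 200, 250], dtype=float)

model = LinearRegression().fit(X, y)
print("training R^2:", model.score(X, y))

# Predict an unknown concentration from a new image's colour reading
new_reading = np.array([[140, 95, 47]], dtype=float)
print("predicted glucose (mg/dL):", model.predict(new_reading)[0])
```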

Keywords: artificial intelligence, glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 203
16799 Study and Simulation of a Dynamic System Using Digital Twin

Authors: J.P. Henriques, E. R. Neto, G. Almeida, G. Ribeiro, J.V. Coutinho, A.B. Lugli

Abstract:

Industry 4.0, or the Fourth Industrial Revolution, is transforming the relationship between people and machines. In this scenario, technologies such as Cloud Computing, Internet of Things, Augmented Reality, Artificial Intelligence and Additive Manufacturing, among others, are making industries and devices increasingly intelligent. One of the most powerful technologies of this new revolution is the Digital Twin, which allows the virtualization of a real system or process. In this context, the present paper addresses the linear and nonlinear dynamic study of a didactic level plant using a Digital Twin. In the first part of the work, the level plant is identified at a fixed operating point by using the least-squares method. The linearized model is embedded in a Digital Twin using Automation Studio® from Famic Technologies. Then, in order to validate the usage of the Digital Twin in the linearized study of the plant, the dynamic response of the real system is compared to the Digital Twin. Furthermore, in order to develop the nonlinear model on a Digital Twin, the didactic level plant is identified by using the method proposed by Hammerstein. Different steps are applied to the plant, and from the Hammerstein algorithm, the nonlinear model is obtained for all operating ranges of the plant. As with the linear approach, the nonlinear model is embedded in the Digital Twin, and the dynamic response is compared to the real system at different operating points. Finally, from the practical results obtained, one can conclude that the usage of Digital Twins to study dynamic systems is extremely useful in the industrial environment, taking into account that it is possible to develop and tune controllers by using the virtual model of the real systems.
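
As a rough illustration of the identification step, the sketch below fits a first-order discrete-time model of a level plant around an operating point by ordinary least squares. The model structure and the simulated data are assumptions for demonstration, not the plant used in the paper.

```python
import numpy as np

# Hypothetical input/output data around a fixed operating point:
# u = pump command deviation, h = level deviation, one sample per second.
np.random.seed(0)
N = 200
u = np.random.uniform(-1, 1, N)
a_true, b_true = 0.95, 0.08            # "true" plant used only to generate data
h = np.zeros(N)
for k in range(1, N):
    h[k] = a_true * h[k - 1] + b_true * u[k - 1] + 0.002 * np.random.randn()

# First-order ARX model  h[k] = a*h[k-1] + b*u[k-1]  fitted by least squares
Phi = np.column_stack([h[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, h[1:], rcond=None)
a_hat, b_hat = theta
print(f"identified model: h[k] = {a_hat:.3f}*h[k-1] + {b_hat:.3f}*u[k-1]")
# The identified (a_hat, b_hat) pair is what would be embedded in the Digital Twin.
```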

Keywords: industry 4.0, digital twin, system identification, linear and nonlinear models

Procedia PDF Downloads 148
16798 A Survey on Quasi-Likelihood Estimation Approaches for Longitudinal Set-ups

Authors: Naushad Mamode Khan

Abstract:

The Com-Poisson (CMP) model is one of the most popular discrete generalized linear models (GLMs) that handles equi-, over- and under-dispersed data. In the longitudinal context, an integer-valued autoregressive (INAR(1)) process that incorporates covariate specification has been developed to model longitudinal CMP counts. However, the joint CMP likelihood function is difficult to specify and thus restricts likelihood-based estimation methodology. The joint generalized quasi-likelihood approach (GQL-I) was instead considered, but it is rather computationally intensive and may not even estimate the regression effects due to a complex and frequently ill-conditioned covariance structure. This paper proposes a new GQL approach for estimating the regression parameters (GQL-III) that is based on a single score vector representation. The performance of GQL-III is compared with GQL-I and separate marginal GQLs (GQL-II) through simulation experiments and is shown to yield estimates as efficient as those of GQL-I while being far more computationally stable.

Keywords: longitudinal, com-Poisson, ill-conditioned, INAR(1), GLMs, GQL

Procedia PDF Downloads 354
16797 Financial Assets Return, Economic Factors and Investor's Behavioral Indicators Relationships Modeling: A Bayesian Networks Approach

Authors: Nada Souissi, Mourad Mroua

Abstract:

The main purpose of this study is to examine the interaction between financial asset volatility, economic factors and investor's behavioral indicators related to both the company's and the market's stocks for the period from January 2000 to January 2020. Using multiple linear regression and Bayesian network modeling, the results show both positive and negative relationships between the investor's psychology index, economic factors and the predicted stock market return. We reveal that the application of the discrete Bayesian network helps identify the different cause-and-effect relationships between all economic and financial variables and the psychology index.

Keywords: financial asset return predictability, economic factors, investor's psychology index, Bayesian approach, probabilistic networks, parametric learning

Procedia PDF Downloads 149
16796 A Historical Analysis of the Concept of Equivalence from Different Theoretical Perspectives in Translation Studies

Authors: Amenador Kate Benedicta, Wang Zhiwei

Abstract:

Since the latter part of the 20th century, the notion of equivalence has continued to be a central and critical concept in the development of translation theory. After decades of argument over word-for-word and free translation methods, scholars attempting to develop more systematic and efficient translation theories began to focus on fundamental translation concepts such as equivalence. Although the concept of equivalence has piqued the interest of many scholars, its definition, scope, and applicability have sparked contentious arguments within the discipline. As a result, several distinct theories and explanations of the concept of equivalence have been put forward over the last half-century. Thus, this study explores and discusses the evolution of the critical concept of equivalence in translation studies through a bibliometric investigation of manual and digital books and articles, analyzing different scholars' key contributions to and limitations of equivalence from various theoretical perspectives. While analyzing them, emphasis is placed on the innovations that each theory has brought to the comprehension of equivalence. In order to achieve the aim of the study, the article begins by discussing the contributions of linguistically motivated theories to the notion of equivalence in translation, followed by functionalist-oriented contributions, before moving on to more recent advancements in translation studies on the concept. Because equivalence is such a broad notion, it is impossible to discuss each researcher in depth. As a result, the most well-known names and their equivalence theories are compared and contrasted in this research. The study emphasizes the developmental progression in our comprehension of the equivalence concept and the equivalent effect. It concludes that the various theoretical perspectives' contributions to the notion of equivalence complement and make up for the limitations of each other. The study also highlights how troublesome the equivalence concept can become in terms of identifying the nature of translation, and how central and unavoidable the concept is in every act of translation, despite its limitations. The significance of the study lies in its synthesis of the different contributions and limitations of the various theories offered by scholars on the notion of equivalence, offering literature to both students and scholars in the field and providing insight into future theoretical development.

Keywords: equivalence, functionalist translation theories, linguistic translation approaches, translation theories, Skopos

Procedia PDF Downloads 113
16795 The Circularity of Re-Refined Used Motor Oils: Measuring Impacts and Ensuring Responsible Procurement

Authors: Farah Kanani

Abstract:

Blue Tide Environmental is a company focused on developing a network of used motor oil recycling facilities across the U.S. The company initiated the redesign of its recycling plant in Texas and aimed to establish an updated carbon footprint of re-refined used motor oils compared to an equivalent product derived from virgin stock that is not re-refined. The aim was to quantify the emissions savings of a circular alternative to conventional end-of-life combustion of used motor oil (UMO). To do so, the company mandated an ISO-compliant carbon footprint, utilizing complex models requiring geographical and temporal accuracy to accommodate the U.S. refinery market. The quantification of linear and circular flows, proxies for fuel substitution and system expansion for multi-product outputs were all critical methodological choices and were tested through sensitivity analyses. The re-refined system consists of continuous recycling of UMO, and thus end-of-life is considered non-existent. The unique perspective on this topic is a life cycle, i.e. holistic, one, and this example essentially demonstrates how a cradle-to-cradle model can be used to quantify a comparative carbon footprint. The intended audience is lubricant manufacturers as the consumers, motor oil industry professionals and other industry members interested in performing cradle-to-cradle modeling.

Keywords: circularity, used motor oil, re-refining, systems expansion

Procedia PDF Downloads 31
16794 Design of Microwave Building Block by Using Numerical Search Algorithm

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

With the development of technology, countries have gradually allocated more and more of the frequency spectrum for civil and commercial usage, especially the high radio frequency bands offering high information capacity. Field effects become more and more prominent in microwave components as frequency increases, which invalidates transmission line theory and complicates the design of microwave components. Here a modeling approach based on a numerical search algorithm is proposed to design various building blocks for microwave circuits and avoid complicated impedance matching and equivalent electrical circuit approximation. Concretely, a microwave component is discretized into a set of segments along the microwave propagation path. Each segment is initialized with random dimensions, which constructs a multi-dimensional parameter space. Then numerical search algorithms (e.g. the pattern search algorithm) are used to find the ideal geometrical parameters. The optimal parameter set is achieved by evaluating the fitness of the S parameters after a number of iterations. We have adopted this approach in our current projects and designed many microwave components including sharp bends, T-branches, Y-branches, microstrip-to-stripline converters, etc. For example, a stripline 90° bend was designed in a 2.54 mm x 2.54 mm space for dual-band operation (Ka band and Ku band) with < 0.18 dB insertion loss and < -55 dB reflection. We expect that this approach can enrich the toolkits of microwave designers.
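
The search loop itself is straightforward; the sketch below implements a basic coordinate pattern search over segment dimensions, with a smooth synthetic placeholder standing in for the S-parameter fitness evaluation (in practice that evaluation would come from a full-wave electromagnetic solver). The dimension values and the target are illustrative assumptions.

```python
import numpy as np

def fitness(dims):
    """Placeholder fitness: in practice this would call an EM solver and combine
    |S11| (reflection) and insertion loss over the target bands. A synthetic
    quadratic stands in here so the search loop is runnable."""
    target = np.array([0.40, 0.25, 0.60, 0.35])   # hypothetical ideal widths (mm)
    return float(np.sum((dims - target) ** 2))

def pattern_search(x0, step=0.1, tol=1e-4, max_iter=500):
    x, f = np.array(x0, float), fitness(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):            # poll along each coordinate direction
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                ft = fitness(trial)
                if ft < f:
                    x, f, improved = trial, ft, True
        if not improved:
            step *= 0.5                    # shrink the mesh when no move helps
            if step < tol:
                break
    return x, f

x0 = np.random.uniform(0.1, 1.0, 4)        # random initial segment dimensions (mm)
best, best_f = pattern_search(x0)
print("optimised dimensions (mm):", np.round(best, 3), " fitness:", best_f)
```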

Keywords: microwave component, microstrip and stripline, bend, power division, numerical search algorithm

Procedia PDF Downloads 379
16793 A Mixed Integer Linear Programming Model for Flexible Job Shop Scheduling Problem

Authors: Mohsen Ziaee

Abstract:

In this paper, a mixed integer linear programming (MILP) model is presented to solve the flexible job shop scheduling problem (FJSP). This problem is one of the hardest combinatorial problems. The objective considered is the minimization of the makespan. The computational results of the proposed MILP model were compared with those of the best-known mathematical model in the literature in terms of computational time. The results show that our model performs better with respect to all the considered performance measures, including the relative percentage deviation (RPD) value, the number of constraints, and the total number of variables. With this improved mathematical model, larger FJSP instances can be solved optimally in reasonable time, and therefore the model would be a better tool for the performance evaluation of approximation algorithms developed for the problem.
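
To make the modelling idea concrete, the sketch below sets up a miniature FJSP (two jobs with two operations each, two machines) as a MILP in PuLP, with binary machine-assignment variables, precedence constraints, and big-M disjunctive constraints on shared machines, minimizing the makespan. The instance data and formulation details are illustrative assumptions, not the authors' model.

```python
import pulp

# Tiny flexible job shop: proc[(job, op)][machine] = processing time.
proc = {(0, 0): {0: 3, 1: 5}, (0, 1): {0: 4, 1: 2},
        (1, 0): {0: 2, 1: 4}, (1, 1): {0: 3, 1: 3}}
ops, machines, M = list(proc), [0, 1], 100

prob = pulp.LpProblem("tiny_FJSP", pulp.LpMinimize)
x = {(o, k): pulp.LpVariable(f"x_{o[0]}{o[1]}_m{k}", cat="Binary")
     for o in ops for k in machines}                      # machine assignment
s = {o: pulp.LpVariable(f"s_{o[0]}{o[1]}", lowBound=0) for o in ops}  # start times
y = {(a, b, k): pulp.LpVariable(f"y_{a[0]}{a[1]}_{b[0]}{b[1]}_m{k}", cat="Binary")
     for a in ops for b in ops if a < b for k in machines}            # ordering
Cmax = pulp.LpVariable("Cmax", lowBound=0)
prob += Cmax

for o in ops:                                  # each operation on exactly one machine
    prob += pulp.lpSum(x[o, k] for k in machines) == 1
dur = {o: pulp.lpSum(proc[o][k] * x[o, k] for k in machines) for o in ops}

for j in (0, 1):                               # precedence within a job
    prob += s[(j, 1)] >= s[(j, 0)] + dur[(j, 0)]
for o in ops:                                  # makespan definition
    prob += Cmax >= s[o] + dur[o]

for (a, b, k), yv in y.items():                # no overlap on a shared machine (big-M)
    prob += s[a] >= s[b] + proc[b][k] - M * (3 - yv - x[a, k] - x[b, k])
    prob += s[b] >= s[a] + proc[a][k] - M * (2 + yv - x[a, k] - x[b, k])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("makespan:", pulp.value(Cmax))
```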

Keywords: scheduling, flexible job shop, makespan, mixed integer linear programming

Procedia PDF Downloads 184
16792 Monthly River Flow Prediction Using a Nonlinear Prediction Method

Authors: N. H. Adenan, M. S. M. Noorani

Abstract:

River flow prediction is essential to ensure that water resources are properly managed and water can be optimally distributed to consumers. This study presents an analysis and prediction using a nonlinear prediction method applied to monthly river flow data from Tanjung Tualang from 1976 to 2006. The nonlinear prediction method involves phase space reconstruction and a local linear approximation approach. The phase space reconstruction embeds the one-dimensional observed series (287 months of data) in a multidimensional phase space to reveal the dynamics of the system. The reconstructed phase space is then used to predict the next 72 months. Prediction performance, based on the correlation coefficient (CC) and root mean square error (RMSE), was compared for the nonlinear prediction method, ARIMA and SVM. The comparison shows that the prediction results of the nonlinear prediction method are better than those of ARIMA and SVM. Therefore, the results of this study could be used to develop an efficient water management system that optimizes the allocation of water resources.
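
A minimal sketch of the two ingredients, time-delay embedding and local linear prediction from nearest neighbours, is given below; the embedding dimension, delay, neighbour count and the synthetic series are assumptions for illustration, not the Tanjung Tualang data.

```python
import numpy as np

def embed(series, dim, tau):
    """Time-delay embedding: each row is one reconstructed phase-space point."""
    n = len(series) - (dim - 1) * tau
    return np.array([series[i:i + (dim - 1) * tau + 1:tau] for i in range(n)])

def local_linear_predict(series, dim=3, tau=1, k=10):
    """Predict the next value from the k nearest neighbours of the last embedded
    point, using a local least-squares (linear) map."""
    X = embed(series, dim, tau)
    targets = series[(dim - 1) * tau + 1:]          # value following each point
    query, X_hist, y_hist = X[-1], X[:-1], targets
    idx = np.argsort(np.linalg.norm(X_hist - query, axis=1))[:k]
    A = np.column_stack([X_hist[idx], np.ones(k)])  # local linear model + intercept
    coef, *_ = np.linalg.lstsq(A, y_hist[idx], rcond=None)
    return np.append(query, 1.0) @ coef

# Illustrative synthetic "monthly flow" series (287 points, seasonal + noise)
t = np.arange(287)
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + np.random.default_rng(1).normal(0, 2, t.size)
print("one-step-ahead prediction:", local_linear_predict(flow))
```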

Keywords: river flow, nonlinear prediction method, phase space, local linear approximation

Procedia PDF Downloads 412
16791 H∞ Takagi-Sugeno Fuzzy State-Derivative Feedback Control Design for Nonlinear Dynamic Systems

Authors: N. Kaewpraek, W. Assawinchaichote

Abstract:

This paper considers an H∞ TS fuzzy state-derivative feedback controller for a class of nonlinear dynamical systems. A Takagi-Sugeno (TS) fuzzy model is used to approximate the class of nonlinear dynamical systems. Then, based on a linear matrix inequality (LMI) approach, we design an H∞ TS fuzzy state-derivative feedback control law which guarantees that the L2-gain of the mapping from the exogenous input noise to the regulated output is less than or equal to a prescribed value. We derive a sufficient condition such that the system with the fuzzy controller is asymptotically stable and the H∞ performance is satisfied. Finally, a numerical example is provided and simulated to illustrate the stability and the effectiveness of the proposed controller.

Keywords: h-infinity fuzzy control, LMI approach, Takagi-Sugeno (TS) fuzzy system, photovoltaic systems

Procedia PDF Downloads 384
16790 Seismic Performance of a Two-Storey RC Frame Designed to EC8 under In-Plane Cyclic Loading

Authors: N. H. Hamid, A. Azmi, M. I. Adiyanto

Abstract:

The main purpose of this paper is to evaluate the seismic performance of a double-bay, two-storey reinforced concrete frame under in-plane lateral cyclic loading, designed using Eurocode 8 (EC8) taking seismic loading into account. The prototype reinforced concrete frame was constructed at one-half scale and tested under in-plane lateral cyclic loading starting at ±0.2% drift, then ±0.25%, and increasing in ±0.25% increments up to ±3.0% drift. The performance of the RC frame is evaluated in terms of the hysteresis loop (load vs. displacement), stiffness, ductility, lateral strength, stress-strain relationship and equivalent viscous damping. Visual observation of the crack pattern after testing showed that the beam-column joint suffered the most severe damage, as it is the critical part of a moment-resisting frame. Spalling of concrete started at ±2.0% drift and became worse at ±2.5% drift. The experimental results show that the maximum lateral strength of the specimen is 99.98 kN and the ductility of the specimen is µ = 4.07, which lies within 3 ≤ µ ≤ 6, sufficient to withstand moderate to severe earthquakes.
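
Two of the reported response measures follow directly from the test data: the displacement ductility µ = Δu/Δy, and the equivalent viscous damping ξeq = ED/(4πES), where ED is the energy dissipated in one hysteresis loop and ES the elastic strain energy at peak response. The sketch below evaluates both on an illustrative loop; the numbers are placeholders, not the test measurements.

```python
import numpy as np

def ductility(delta_ultimate, delta_yield):
    """Displacement ductility  mu = Delta_u / Delta_y."""
    return delta_ultimate / delta_yield

def equivalent_viscous_damping(load, disp):
    """xi_eq = E_D / (4*pi*E_S): loop area over elastic strain energy at peak."""
    e_dissipated = abs(np.trapz(load, disp))           # area enclosed by the loop
    e_elastic = 0.5 * max(abs(load)) * max(abs(disp))  # strain energy at peak response
    return e_dissipated / (4 * np.pi * e_elastic)

# Illustrative load-displacement cycle (not the test data): a lagging sinusoid
theta = np.linspace(0, 2 * np.pi, 400)
disp = 30 * np.sin(theta)                 # mm
load = 90 * np.sin(theta - 0.4)           # kN, phase lag -> energy dissipation

print("ductility example:", ductility(60.0, 14.7))   # ~4.07, as in the abstract
print("equivalent viscous damping:", equivalent_viscous_damping(load, disp))
```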

Keywords: ductility, equivalent viscous damping, hysteresis loops, lateral strength, stiffness

Procedia PDF Downloads 357
16789 Translation Methods Applied While Dealing With System-Bound Terms (Polish-English Translation)

Authors: Anna Kizinska

Abstract:

The research aims to discuss Polish and British incongruent terms that refer to company law. The Polish terms under analysis appear in the Polish Code of Commercial Partnerships and Companies and constitute legal terms or factual terms. The English equivalents of each Polish term under research appear in two translations of the Polish Code of Commercial Partnerships and Companies into English. The theoretical part of the paper includes the presentation of the definitions of a system-bound term and of incongruity of terms. The aim of the analysis is to check whether the classification of translation methods used in the translation of civil law terms covers the translation methods applied while translating company law terms into English. The translation procedures are defined according to Newmark. The stages of the research include 1) presentation of a definition of a Polish term, 2) enumerating the so-far published English equivalents of a given Polish term and comparing their definitions (as long as they appear in English law dictionaries) with the definition of the given Polish term under analysis, 3) checking whether an English equivalent appears or not in, among others, the sources of British law (legislation.gov.uk database), and 4) identifying the translation method that was applied while forming a given English equivalent.

Keywords: translation, legal terms, equivalence, company law, incongruency

Procedia PDF Downloads 89
16788 Achieving 13th Sustainable Development Goal: Urbanization and ICT Empowerment in Pursuit of Carbon Neutrality - Beyond Linear Thinking

Authors: Salim Khan

Abstract:

The attainment of the carbon neutrality objective and the Sustainable Development Goal 13 (SDG-13) target, which pertains to climate action, has received widespread attention in developing and emerging nations. Given the increasing pace of urbanization, technological advancement, and rapid growth, it is imperative to examine the linear and nonlinear effects of urbanization and economic growth and the linear impact of information and communication technology (ICT) on carbon emissions (CO2e). This study employs the Dynamic System GMM (DSGMM) and Panel Quantile Regression (PQR) methodologies to investigate the causal relationship between urbanization, ICT, economic growth, and their interplay with CO2e in 39 BRI countries from 2001 to 2020. The study's findings indicate that the impact of urbanization on CO2e exhibits both linear and nonlinear patterns. The specific nonlinear impact of urbanization leads to a decrease in CO2e, hence facilitating the achievement of carbon neutrality and contributing to SDG-13. The study highlights the importance of ICT in achieving SDG-13 by reducing CO2e, emphasizing the need for informatization. Simultaneously, the findings support the Environmental Kuznets Curve (EKC) hypothesis and the pollution haven theory. Finally, based on the empirical findings, significant policy implications are suggested for achieving SDG-13 and carbon neutrality.

Keywords: urbanization, ICT, CO2 emission, EKC, pollution haven, BRI

Procedia PDF Downloads 25
16787 Free Vibration Analysis of FG Nanocomposite Sandwich Beams Using Various Higher-Order Beam Theories

Authors: Saeed Kamarian

Abstract:

In this paper, free vibrations of Functionally Graded Sandwich (FGS) beams reinforced by randomly oriented Single-Walled Carbon Nanotubes (SWCNTs) are investigated. The Eshelby–Mori–Tanaka approach based on an equivalent fiber is used to determine the material properties of the structure. The natural frequencies of the FGS nanocomposite beam are analyzed based on various Higher-order Shear Deformation Beam Theories (HSDBTs) using an analytical method. A verification study demonstrates the simplicity and accuracy of the method for the free vibration analysis of nanocomposite beams. The effects of the carbon nanotube volume fraction profiles in the face layers, the length to span ratio and the thicknesses of the face layers on the natural frequency of the structure are studied for the different HSDBTs. Results show that by utilizing the FGS type of structure, the free vibration characteristics of structures can be improved. A comparison is also provided to show the difference between the natural frequency responses of the FGS nanocomposite beam reinforced by aligned and by randomly oriented SWCNTs.

Keywords: sandwich beam, nanocomposite beam, functionally graded materials, higher-order beam theories, Mori-Tanaka approach

Procedia PDF Downloads 462
16786 Effects of Local Ground Conditions on Site Response Analysis Results in Hungary

Authors: Orsolya Kegyes-Brassai, Zsolt Szilvágyi, Ákos Wolf, Richard P. Ray

Abstract:

Local ground conditions have a substantial influence on the seismic response of structures. Their inclusion in seismic hazard assessment and structural design can be realized at different levels of sophistication. However, response results based on more advanced calculation methods, e.g. nonlinear or equivalent linear site analysis, tend to show significant discrepancies when compared to simpler approaches. This project's main objective was to compare results from several 1-D response programs to Eurocode 8 design spectra. Data from in-situ site investigations were used for assessing local ground conditions at several locations in Hungary. After a discussion of the in-situ measurements and calculation methods used, a comprehensive evaluation of all major contributing factors for site response is given. While the Eurocode spectra should account for local ground conditions based on soil classification, there is a wide variation in peak ground acceleration determined from 1-D analyses versus Eurocode. Results show that the current Eurocode 8 design spectra may not be conservative enough to account for local ground conditions typical for Hungary.

Keywords: 1-D site response analysis, multichannel analysis of surface waves (MASW), seismic CPT, seismic hazard assessment

Procedia PDF Downloads 246
16785 Enhancing Understanding and Engagement in Linear Motion Using 7R-Based Module

Authors: Mary Joy C. Montenegro, Voltaire M. Mistades

Abstract:

This action research was implemented to enhance the teaching of linear motion and to improve students' conceptual understanding and engagement using a developed 7R-based module called 'module on vectors and one-dimensional kinematics' (MVOK). MVOK was validated in terms of objectives, contents, format, and language used, presentation, usefulness, and overall presentation. The validation process revealed a value of 4.7 interpreted as 'Very Acceptable' with a substantial agreement (0.60) from the validators. One intact class of 46 Grade 12 STEM students from one of the public schools in Paranaque City served as the participants of this study. The students were taught using the module during the first semester of the academic year 2019–2020. Employing the mixed-method approach, quantitative data were gathered using pretest/posttest, activity sheets, problem sets, and survey form, while qualitative data were obtained from surveys, interviews, observations, and reflection log. After the implementation, there was a significant difference of 18.4 on students' conceptual understanding as shown in their pre-test and post-test scores on the 24-item test with a moderate Hake gain equal to 0.45 and an effect size of 0.83. Moreover, the scores on activity and problem sets have a 'very good' to 'excellent' rating, which signifies an increase in the level of students' conceptual understanding. There also exists a significant difference between the mean scores of students' engagement overall (t = 4.79, p = 0.000, p < 0.05) and in the dimension of emotion (t = 2.51, p = 0.03) and participation/interaction (t = 5.75, p = 0.001). These findings were supported by gathered qualitative data. Positive views were elicited from the students since it is an accessible tool for learning and has well-detailed explanations and examples. The results of this study may substantiate that using MVOK will lead to better physics content understanding and higher engagement.

Keywords: conceptual understanding, engagement, linear motion, module

Procedia PDF Downloads 131
16784 Control of a Stewart Platform for Minimizing Impact Energy in Simulating Spacecraft Docking Operations

Authors: Leonardo Herrera, Shield B. Lin, Stephen J. Montgomery-Smith, Ziraguen O. Williams

Abstract:

Three control algorithms, Proportional-Integral-Derivative, Linear-Quadratic-Gaussian, and Linear-Quadratic-Gaussian with shift, were applied in a computer simulation of a one-directional dynamic model of a Stewart Platform. The goal was to compare the dynamic system responses under the three control algorithms and to minimize the impact energy when simulating spacecraft docking operations. Equations were derived for the control algorithms and for the input and output of the feedback control system. Using MATLAB, Simulink diagrams were created to represent the three control schemes. A switch selector was used for the convenience of changing among the different controllers. The simulation demonstrated that the controller using the Linear-Quadratic-Gaussian with shift algorithm resulted in the lowest impact energy.
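
As a rough illustration of the state-feedback core of such a design, the sketch below computes a continuous-time LQR gain for a one-directional mass-spring-damper stand-in model in Python (the paper itself uses MATLAB/Simulink). The plant matrices and weights are assumptions, chosen so that relative velocity, and hence impact energy at contact, is heavily penalised.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# One-directional stand-in model: mass-spring-damper, states [position, velocity].
m, c, k = 50.0, 200.0, 1.0e4            # illustrative platform parameters
A = np.array([[0.0, 1.0], [-k / m, -c / m]])
B = np.array([[0.0], [1.0 / m]])

# LQR weights: penalise velocity heavily to reduce impact energy at contact.
Q = np.diag([1.0, 100.0])
R = np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)    # Riccati solution
K = np.linalg.solve(R, B.T @ P)         # optimal state-feedback gain, u = -K x
print("LQR gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```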

Keywords: controller, Stewart platform, docking operation, spacecraft

Procedia PDF Downloads 51
16783 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating the wind energy output. Therefore, before performing a detailed potential energy analysis, the most precise distribution model for the wind speed data must be found. In this research, several goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, Chi-Square, root mean square error (RMSE), AIC and BIC, were combined to determine the best-fitting distribution for the wind speed data. The suggested method combines all of these criteria. It was used to statistically fit 14 distribution models to wind speed data at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed statistical distribution, and the graphical representation is consistent with the analytical results. This research presents three estimation methods, MLM, MOM and MLE, that can be used to fit the distributions used to estimate the wind. The third-order moment used in the wind energy formula is a key quantity because it makes an important contribution to the precise estimation of wind energy. In order to demonstrate the merit of the suggested MOM, it was compared with well-known estimation methods such as the method of linear moments and maximum likelihood estimation. In the comparative analysis, based on several goodness-of-fit criteria, the performance of the considered techniques is evaluated on actual wind speeds measured in different time periods. The results obtained show that MOM provides a more precise estimation than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
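
As a small illustration of the fitting and ranking step, the sketch below fits a two-parameter Weibull distribution to a synthetic wind speed sample by maximum likelihood, computes two of the goodness-of-fit scores mentioned above (KS statistic and a quantile-based RMSE), and evaluates the third-moment term that drives the mean wind power density. The data and the choice of the Weibull family are assumptions for demonstration, not the Pakistani site data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
wind = rng.weibull(2.0, 1000) * 7.0          # synthetic wind speeds (m/s)

# Maximum-likelihood fit of a two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(wind, floc=0)

# Goodness-of-fit scores used to rank candidate distributions
ks_stat, ks_p = stats.kstest(wind, "weibull_min", args=(shape, loc, scale))
probs = (np.arange(1, wind.size + 1) - 0.5) / wind.size
rmse = np.sqrt(np.mean((np.sort(wind) -
                        stats.weibull_min.ppf(probs, shape, loc, scale)) ** 2))

# Third moment of wind speed drives the mean wind power density P = 0.5*rho*E[v^3]
rho_air = 1.225                              # kg/m^3
power_density = 0.5 * rho_air * np.mean(wind ** 3)

print(f"Weibull k={shape:.2f}, c={scale:.2f}, KS={ks_stat:.3f}, RMSE={rmse:.3f}")
print(f"mean wind power density: {power_density:.1f} W/m^2")
```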

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 84
16782 Electrochemical Study of Interaction of Thiol Containing Proteins with As (III)

Authors: Sunil Mittal, Sukhpreet Singh, Hardeep Kaur

Abstract:

The affinity of the thiol group for heavy metals is a well-established phenomenon. The present investigation focuses on the electrochemical response of cysteine and thioredoxin against arsenite (As(III)) on indium tin oxide (ITO) electrodes. It was observed that both compounds produce distinct responses in free and immobilised form at the electrode. SEM, FTIR and impedance studies of the modified electrode were conducted for characterization. Various parameters were optimized to assess the effect of As(III) on the reduction potential of the compounds. Cyclic voltammetry and linear sweep voltammetry were employed as the analysis techniques. The optimum response was observed at neutral pH in both cases, at optimum concentrations of 2 mM and 4.27 µM for cysteine and thioredoxin, respectively. It was observed that the presence of As(III) increases the reduction current of both moieties. The linear range of detection for As(III) with cysteine was from 1 to 10 mg L⁻¹ with a detection limit of 0.8 mg L⁻¹. Thioredoxin was found to be more sensitive to As(III) and displayed a linear range from 0.1 to 1 mg L⁻¹ with a detection limit of 10 µg L⁻¹.
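
The reported linear ranges and detection limits follow from a standard linear calibration of peak current against concentration; the sketch below shows one common way to compute them (slope, residual standard deviation, and the 3.3·σ/slope detection-limit convention). The current values are hypothetical, not the measured voltammograms.

```python
import numpy as np

# Hypothetical calibration data for a cysteine-modified electrode:
# As(III) standard concentrations (mg/L) vs. measured reduction peak current (uA).
conc = np.array([1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
current = np.array([2.1, 3.9, 7.8, 11.6, 15.9, 19.7])

slope, intercept = np.polyfit(conc, current, 1)            # linear calibration curve
residuals = current - (slope * conc + intercept)
s_res = np.sqrt(np.sum(residuals ** 2) / (conc.size - 2))  # residual standard deviation

lod = 3.3 * s_res / slope      # common detection-limit convention (3.3*sigma/slope)
loq = 10.0 * s_res / slope     # quantification limit

print(f"sensitivity: {slope:.2f} uA per mg/L, LOD ~ {lod:.2f} mg/L, LOQ ~ {loq:.2f} mg/L")

# Unknown sample: convert a measured current back to a concentration
print("unknown sample conc:", (9.5 - intercept) / slope, "mg/L")
```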

Keywords: arsenite, cyclic voltammetry, cysteine, thioredoxin

Procedia PDF Downloads 211
16781 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons

Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung

Abstract:

The tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring radiation single-event energy depositions in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness, RBE, for radiation therapy or the radiation weighting factor, WR, for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size of the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls that contained small plugs of different materials, i.e. Li, B, A150, Cd and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e. 55% C3H8, 39.6% CO2 and 5.4% N2 by partial pressure. A gas pressure of 422 torr was applied to simulate a 1 µm diameter biological site. The calibration of the mini TEPC was performed using two marking points in the lineal energy spectrum, i.e. the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/µm) peaks due to neutron-capture products, medium lineal energy (10 – 100 keV/µm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/µm) peaks of reactor photons. For the cases of the Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150, and B. The low lineal energy peaks were smaller compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energies.

Keywords: TEPC, lineal energy, microdosimetry, radiation quality

Procedia PDF Downloads 470
16780 Lateral Torsional Buckling: Tests on Glued Laminated Timber Beams

Authors: Vera Wilden, Benno Hoffmeister, Markus Feldmann

Abstract:

Glued laminated timber (glulam) is a preferred choice for long-span girders, e.g. for gyms or storage halls. While the material provides sufficient strength to resist the bending moments, large spans lead to increased slenderness of such members and to a higher susceptibility to stability issues, in particular to lateral torsional buckling (LTB). Rules for the determination of the ultimate LTB resistance are provided by Eurocode 5. The verification of the resistance may be performed using the so-called equivalent member method or by means of second-order theory calculations (direct method), considering equivalent imperfections. Both methods have significant limitations concerning their applicability: the equivalent member method is limited to rather simple cases, while the direct method is missing detailed provisions regarding imperfections and requirements for numerical modeling. In this paper, the results of a test series on slender glulam beams in three- and four-point bending are presented. The tests were performed in an innovative, newly developed testing rig, allowing for a very precise definition of loading and boundary conditions. The load was introduced by a hydraulic jack, which follows the lateral deformation of the beam by means of a servo-controller coupled with the tested member, keeping the load direction vertical. The deformation-controlled tests allowed for the identification of the ultimate limit state (governed by elastic stability) and the corresponding deformations. Prior to the tests, the structural and geometrical imperfections were determined and used later in the numerical models. After the stability tests, the nearly undamaged members were tested again in pure bending until reaching the ultimate moment resistance of the cross-section. These results, accompanied by numerical studies, were compared to resistance values obtained using both methods according to Eurocode 5.

Keywords: experimental tests, glued laminated timber, lateral torsional buckling, numerical simulation

Procedia PDF Downloads 237
16779 Theoretical Approach to Kinetics of Transient Plasticity of Metals under Irradiation

Authors: Pavlo Selyshchev, Tetiana Didenko

Abstract:

Within the framework of obstacle radiation hardening and the dislocation climb-glide model, a theoretical approach is developed to describe the peculiarities of the transient plasticity of metals under irradiation. The nonlinear dynamics of the accumulation of point defects (vacancies and interstitial atoms) is considered. We consider a metal under such stress and irradiation conditions that creep is determined by dislocation motion: dislocations climb over obstacles and glide between obstacles. It is shown that the rivalry between the vacancy and interstitial fluxes to dislocations leads to fractures in the time dependence of plasticity. Simulation and analysis of this phenomenon are performed. Qualitatively different regimes of transient plasticity under irradiation are found. The fracture time is obtained. The theoretical results are compared with the experimental ones.

Keywords: climb and glide of dislocations, fractures of transient plasticity, irradiation, non-linear feed-back, point defects

Procedia PDF Downloads 202
16778 Timetabling Communities’ Demands for an Effective Examination Timetabling Using Integer Linear Programming

Authors: N. F. Jamaluddin, N. A. H. Aizam

Abstract:

This paper addresses the educational timetabling problem, a type of scheduling problem that is considered one of the most challenging problems in optimization and operational research. The university examination timetabling problem (UETP), which involves assigning a set number of exams to a set number of timeslots whilst fulfilling all required conditions, has been widely investigated. The limited number of available timeslots and resources, together with the increasing number of examinations, is the main reason this problem is difficult to solve. Dynamic changes in the examination scheduling system add to the complication, particularly in coping with the demands and new requirements of the communities. Our objective is to investigate these demands and requirements, with subjects taken from Universiti Malaysia Terengganu (UMT), through questionnaires. An integer linear programming model which reflects the preferences obtained was formulated to produce an effective examination timetable.
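
A toy version of such a model is sketched below in PuLP: binary variables assign each exam to exactly one timeslot, conflicting exams (those sharing students) may not share a slot, and a preference-based penalty is minimized. The instance data and the particular preference term are illustrative assumptions, not the UMT survey results.

```python
import pulp

# Toy instance: 4 exams, 3 timeslots; pairs of exams that share students
# cannot be scheduled in the same slot.
exams, slots = ["E1", "E2", "E3", "E4"], [0, 1, 2]
conflicts = [("E1", "E2"), ("E2", "E3"), ("E1", "E4")]
late_penalty = {0: 0, 1: 0, 2: 1}        # hypothetical preference: avoid the last slot

prob = pulp.LpProblem("exam_timetable", pulp.LpMinimize)
x = {(e, t): pulp.LpVariable(f"x_{e}_{t}", cat="Binary") for e in exams for t in slots}

prob += pulp.lpSum(late_penalty[t] * x[e, t] for e in exams for t in slots)

for e in exams:                          # every exam gets exactly one slot
    prob += pulp.lpSum(x[e, t] for t in slots) == 1
for (a, b) in conflicts:                 # conflicting exams never share a slot
    for t in slots:
        prob += x[a, t] + x[b, t] <= 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for e in exams:
    for t in slots:
        if pulp.value(x[e, t]) > 0.5:
            print(e, "-> slot", t)
```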

Keywords: demands, educational timetabling, integer linear programming, scheduling, university examination timetabling problem (UETP)

Procedia PDF Downloads 337
16777 The Linear Combination of Kernels in the Estimation of the Cumulative Distribution Functions

Authors: Abdel-Razzaq Mugdadi, Ruqayyah Sani

Abstract:

The Kernel Distribution Function Estimator (KDFE) method is the most popular method for nonparametric estimation of the cumulative distribution function. The kernel and the bandwidth are the most important components of this estimator. In this investigation, we replace the kernel in the KDFE with a linear combination of kernels to obtain a new estimator based on the linear combination of kernels. The mean integrated squared error (MISE), the asymptotic mean integrated squared error (AMISE) and the asymptotically optimal bandwidth for the new estimator are derived. We propose a new data-based method to select the bandwidth for the new estimator. The new technique is based on the plug-in technique in density estimation. We evaluate the new estimator and the new technique using simulations and real-life data.
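
For orientation, the sketch below evaluates a KDFE whose kernel is a linear combination of a Gaussian and an Epanechnikov kernel (using their integrated forms), with equal weights and a fixed bandwidth. The mixture weights, bandwidth and data are assumptions for illustration; the paper's contribution, the MISE/AMISE analysis and the plug-in bandwidth selector, is not reproduced here.

```python
import numpy as np
from scipy import stats

def kdfe_mixture(x_grid, data, h, weights=(0.5, 0.5)):
    """KDFE with a two-kernel linear combination:
        F_hat(x) = (1/n) * sum_i sum_j a_j * W_j((x - X_i)/h),
    where W_j is the integrated kernel and the weights a_j sum to one."""
    u = (x_grid[:, None] - data[None, :]) / h       # shape (grid, n)
    W_gauss = stats.norm.cdf(u)                     # integrated Gaussian kernel
    uc = np.clip(u, -1.0, 1.0)                      # integrated Epanechnikov kernel
    W_epan = 0.5 + 0.75 * (uc - uc ** 3 / 3.0)
    W = weights[0] * W_gauss + weights[1] * W_epan
    return W.mean(axis=1)

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.0, scale=1.0, size=200)
grid = np.linspace(-3, 3, 7)
print(np.round(kdfe_mixture(grid, sample, h=0.4), 3))  # estimated CDF values
print(np.round(stats.norm.cdf(grid), 3))               # true CDF, for comparison
```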

Keywords: estimation, bandwidth, mean square error, cumulative distribution function

Procedia PDF Downloads 581
16776 Nonparametric Path Analysis with a Truncated Spline Approach in Modeling Waste Management Behavior Patterns

Authors: Adji Achmad Rinaldo Fernandes, Usriatur Rohma

Abstract:

Nonparametric path analysis is a statistical method that does not rely on the assumption that the form of the regression curve is known. The purpose of this study is to determine the best truncated spline nonparametric path function between linear and quadratic polynomial degrees with 1, 2, and 3 knot points, and to test the significance of the estimates of the best truncated spline nonparametric path function in the model of the effect of perceived benefits and perceived convenience on the behavior of converting waste into economic value, through the intention variable of changing people's mindset about waste, using the t-test statistic at the jackknife resampling stage. The data used in this study are primary data obtained from research grants. The results show that the best nonparametric truncated spline path model is the quadratic polynomial with 3 knot points. In addition, the significance test of the best truncated spline nonparametric path function estimates using jackknife resampling shows that all exogenous variables have a significant influence on the endogenous variables.
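
The building block of such a model is the truncated power basis; the sketch below constructs a quadratic basis with 3 knot points for a single path and estimates its coefficients by least squares. The simulated predictor-response pair is a stand-in assumption, not the survey data from the grant.

```python
import numpy as np

def truncated_spline_basis(x, degree, knots):
    """Design matrix of a truncated power spline:
    [1, x, ..., x^d, (x - k1)_+^d, ..., (x - km)_+^d]."""
    cols = [x ** p for p in range(degree + 1)]
    cols += [np.maximum(x - k, 0.0) ** degree for k in knots]
    return np.column_stack(cols)

# Illustrative data standing in for one path (e.g. perceived benefit -> intention)
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(1, 5, 120))
y = 0.5 + 0.8 * x - 0.1 * (x - 3) ** 2 + rng.normal(0, 0.2, x.size)

knots = np.quantile(x, [0.25, 0.50, 0.75])          # 3 knot points
Z = truncated_spline_basis(x, degree=2, knots=knots)
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)        # least-squares estimate
fitted = Z @ beta
print("knots:", np.round(knots, 2))
print("R^2  :", 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2))
```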

Keywords: nonparametric path analysis, truncated spline, linear, quadratic, behavior to turn waste into economic value, jackknife resampling

Procedia PDF Downloads 47
16775 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis

Authors: R. Periyasamy, Deepak Joshi, Sneh Anand

Abstract:

Foot posture assessment is important for evaluating foot type, which can cause gait and postural defects in all age groups. Although different methods are used for the classification of foot arch type in clinical and research examinations, there is no clear approach for selecting the most appropriate measurement system. Therefore, the aim of this study was to develop a system for the evaluation of foot type as a clinical decision-making aid for the diagnosis of flat and normal arches, based on the Arch Index (AI) and the foot pressure distribution parameter Power Ratio (PR). The accuracy of the system was evaluated for 27 subjects with ages ranging from 24 to 65 years. Foot area measurements (hind foot, mid foot, and forefoot) were acquired simultaneously from the foot pressure intensity image using the portable PedoPowerGraph system, and the image was analyzed in the frequency domain to obtain the foot pressure distribution parameter PR. From our results, we obtain 100% classification accuracy of normal and flat feet by using the linear discriminant analysis method. We observe that there is no misclassification of foot types because foot pressure distribution data are incorporated instead of only the arch index (AI). We found that the mid-foot pressure distribution ratio and the arch index (AI) value are well correlated with foot arch type based on visual analysis. Therefore, this paper suggests that the proposed system is accurate and easy to use for determining foot arch type from the arch index (AI) together with the mid-foot pressure distribution ratio instead of the physical contact area alone. Hence, such a computational-tool-based system can help clinicians in the assessment of foot structure and in cross-checking their diagnosis of flat foot from the mid-foot pressure distribution.
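
The classification step is a textbook application of linear discriminant analysis on a two-feature vector (AI, PR); a minimal scikit-learn sketch is given below. The feature values and class labels are illustrative assumptions, not the study's 27-subject dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical feature table: [arch index AI, mid-foot power ratio PR] per subject,
# label 0 = normal arch, 1 = flat foot.
rng = np.random.default_rng(7)
normal = np.column_stack([rng.normal(0.23, 0.02, 15), rng.normal(0.30, 0.05, 15)])
flat   = np.column_stack([rng.normal(0.30, 0.02, 12), rng.normal(0.55, 0.05, 12)])
X = np.vstack([normal, flat])
y = np.array([0] * 15 + [1] * 12)

clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, X, y, cv=5).mean()       # cross-validated accuracy
print(f"LDA cross-validated accuracy: {acc:.2f}")

clf.fit(X, y)
print("new subject (AI=0.27, PR=0.45) classified as:",
      "flat" if clf.predict([[0.27, 0.45]])[0] == 1 else "normal")
```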

Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis

Procedia PDF Downloads 499
16774 Agriculture Yield Prediction Using Predictive Analytic Techniques

Authors: Nagini Sabbineni, Rajini T. V. Kanth, B. V. Kiranmayee

Abstract:

India's economy primarily depends on agricultural yield growth and its allied agro-industry products. Agricultural yield prediction is among the toughest tasks for agricultural departments across the globe. The agricultural yield depends on various factors. Particularly in countries like India, the majority of agricultural growth depends on rainwater, which is highly unpredictable. Agricultural growth depends on different parameters, namely water, nitrogen, weather, soil characteristics, crop rotation, soil moisture, surface temperature and rainwater. In our paper, extensive exploratory data analysis is done and various predictive models are designed. Further, various regression models, such as linear, multiple linear and non-linear models, are tested for the effective prediction or forecast of the agricultural yield for various crops in the Andhra Pradesh and Telangana states.

Keywords: agriculture yield growth, agriculture yield prediction, explorative data analysis, predictive models, regression models

Procedia PDF Downloads 314